Introduction

The process of automating tasks previously performed by people can present several potential hazards, many of which can be mitigated at the initial design phase. The emergence of artificial intelligence (AI) systems is expected to bring about significant changes in the roles and responsibilities of both instructors and students in the field of education. In the absence of precautions, certain AI-driven educational technologies have the potential to restrict instructors’ autonomy in making pedagogical decisions, while also introducing errors in classification or judgement. However, there are viable solutions available. This chapter provides an overview of the preliminary findings from a comprehensive examination of existing literature on the ethical concerns surrounding AI in education. The primary focus is on exploring the impact of AI on the autonomy and decision-making abilities of both teachers and students.

The field of education is particularly exposed to the rapid advancement of AI methods and their capacity to carry out increasingly complex tasks. The Beijing Consensus on AI in Education, as outlined by UNESCO (2019), recognises the potential advantages of AI in several tasks traditionally performed by students, instructors, and administrative personnel. AI can be applied in several ways in the field of education, such as smart tutoring, learning assessment, and student attrition prevention (Zawacki-Richter et al., 2019). These advancements prompt new questions: what if teachers were freed from the time-consuming work of grading assignments? Could a student receive immediate help at home when they encounter a challenging maths problem? Can an AI support learners at the same level as a human teacher? These questions speak to some of the use cases for AI in education, but they also highlight the ethical concerns that should be addressed when considering its implementation and widespread use.

This chapter aims to examine these concerns through the lens of preserving human agency (Engeström & Sannino, 2013), which is a significant challenge for AI in education, on par with other challenges such as social justice, human complexity, and governance. We first provide contextual elements, including a definition of AI in education and two theoretical benchmarks (the technical system and the concept of agency), and then present the findings of a literature review on the ethical concerns related to agency and AI in education.

Artificial Intelligence Applied to Education

AI is a term that can mean many things. Applied to education, it aims to accomplish complex tasks, such as providing feedback and differentiating learning experiences, that, until recently, were performed only by human beings. AI can be considered a set of techniques with more or less defined contours. Its most common techniques fall under machine learning, which can be supervised, semi-supervised, or unsupervised (Taulli, 2019). Deep learning with artificial neural networks can be used to process so-called big data (i.e., data characterised by the speed at which it multiplies, its volume, and its diversity). Humble and Mozelius (2019) approach it by emphasising its interdisciplinary character, which goes beyond computing: ‘AIED is, as AI, an interdisciplinary field containing psychology, linguistics, neuroscience, education, anthropology and sociology with the goal of being a powerful tool for education and providing a deeper understanding of how learning occurs’ (p. 1). AI can also be defined by qualities other than its computing methods, such as the functions it performs in a system. Loder and Nicholas (2018) present AI as ‘computers which perform cognitive tasks usually associated with human minds, particularly learning and problem-solving’ (p. 11). For Popenici and Kerr (2017), AI in education consists of ‘computing systems that are able to engage in human-like processes such as learning, adapting, synthesising, self-correction and use of data for complex processing tasks’ (p. 2). It is this last definition that we retain, because it allows us to move beyond the opposition between human intelligence and AI and to consider the complex interactions between the two.
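To make the distinction between these techniques concrete, the following sketch illustrates what supervised machine learning looks like in one of the use cases mentioned above, student attrition prevention: a model is fitted on historical, labelled examples and then asked to score new cases. The dataset, feature names, and library choice (scikit-learn) are purely illustrative assumptions, not a description of any system discussed in this chapter.

```python
# A minimal sketch of supervised learning applied to student-attrition prediction.
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_students = 500

# Hypothetical features: average grade (%), absences per term, weekly hours on the platform.
X = np.column_stack([
    rng.normal(70, 12, n_students),
    rng.poisson(4, n_students),
    rng.normal(10, 3, n_students),
])

# Synthetic label (1 = dropped out), generated from the features plus noise
# so that the example contains some signal for the model to learn.
risk = 0.04 * (75 - X[:, 0]) + 0.2 * X[:, 1] - 0.1 * X[:, 2]
y = (risk + rng.normal(0, 1, n_students) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The model outputs a probability, not a pedagogical decision; how that probability
# is presented to teachers and students is where questions of agency begin.
new_student = np.array([[62.0, 7, 6.5]])
print("estimated dropout probability:", model.predict_proba(new_student)[0, 1])
```

Whether such a model preserves or restricts agency depends less on the algorithm itself than on whether its estimates are offered as information or imposed as decisions, a distinction we return to throughout this chapter.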

AI Through the Prism of the Technical System

We propose to consider these techniques through the lens of the technical system theory of Ellul (1977). According to this theory, techniques constantly redefine the reality of human experience. Ellul gives the example of television, made possible by an accumulation of techniques that viewers eventually no longer see. Television made possible a new form of communication that we integrated and then trivialised to the point of no longer taking an interest in how it works. In short, the techniques that made television possible, such as electricity or broadcasting antennas, end up taking root and redefining the actions and social relationships of its users. Applying a similar reasoning to AI, one might wonder whether the complexity of AI techniques in education will change our relationship to pedagogy and to the professional practices specific to teaching. For instance, how might teachers reallocate their time if AI systems freed them from having to design and differentiate educational activities for their students? Will educators stop taking an interest in docimology, the science of evaluation, because AI is capable of doing it for them? Unlike other educational technologies, the particularity of AI is that it is developed with the aim of accomplishing increasingly complex tasks, which then allows the teacher to concentrate on those tasks that AI does not handle well (e.g., high-complexity tasks that require a nuanced understanding of context, such as student relationships).

In this context, it is important to remember that teaching requires complex actions that are well-defined and familiar to educators. Hence, the integration of AI-based tools to accomplish these actions should not erode our understanding of this complexity, nor of how we currently navigate and manage it in educational settings. As we embrace AI in education, we must be vigilant not to lose sight of the multifaceted nature of teaching, ensuring that technology complements, rather than overshadows, the expertise and depth of knowledge teachers bring to the educational landscape.

Agency to Understand and Situate Human Activity

The definition of AI in education by Popenici and Kerr (2017) introduces the idea that computer systems simulating human intelligence operate within human systems. We consider each of these systems as agents of one another. In computer science, the term agent designates a system with a certain degree of autonomy, capable of carrying out actions that will have an impact on its environment; these actions in turn affect its future decision-making (Ferber, 1995, p. 13). This is one of the particularities of so-called intelligent complex systems: they do not just reason (Ferber, 1995, p. 13); they act and transform their environment.

In the social sciences, the concept of agency also refers to a form of autonomy, but this time on the part of people. According to Engeström and Sannino (2013), agency is a voluntary search for transformation on the part of the subject; it manifests itself in a polymotive problematic situation in which the subject evaluates and interprets the circumstances, makes decisions according to those interpretations, and executes those decisions (p. 7). For example, a teacher might have two seemingly conflicting goals: the desire to provide students with personalised feedback, while also wanting to return their grades as quickly as possible. In such situations, when motives or goals conflict, teachers can resolve them by taking actions that demonstrate their agency. For AI in education, this could mean empowering the student or teacher to have a greater impact through the use of AI systems. While this example describes an ideal scenario, current use cases for AI in education tend to focus on helping educators make better pedagogical decisions, automating time-consuming or laborious tasks, or analysing large data sets with the goal of improving learning outcomes.

Research Question

At first glance, current educational use cases of AI are likely to encroach on the agency of students and teachers, especially when it comes to the selection and assessment of educational resources or activities. To address these initial challenges, we will attempt to answer the following research question: what use cases for AI in education are likely to limit the agency of teachers and students?

Method: A Literature Review on Ethical Issues

This chapter is based on data collected during a systematic literature review project around the terms ‘ethics, AI, and education’ in the Google Scholar, Web of Science, Microsoft Academic, EBSCO Education, Dimensions AI, and Scopus databases, supplemented with relevant references identified by the team (Michel & Le Nagard, 2019). The papers are peer-reviewed scientific articles or conference proceedings, written in French or English and published between 2010 and 2021 (N = 58). The articles were read and then coded by segment, using NVivo software, by two people. While the full review will be the subject of another publication, this chapter offers a specific, in-depth, and original analysis of the issues relating to the preservation of human agency. For the purposes of this chapter, 24 documents (n = 24) were retained for analysis, comprising 62 coded segments.

Results Related to the Agency of Teachers and Students

This section presents, in order, the results relating to teacher agency and then those relating to student agency. It aims to report, as faithfully and objectively as possible, without interpretation, the ideas conveyed in the literature.

Results for Teacher Agency

Without specifying the type of AI tools in question, several of the documents consulted acknowledge the risk that AI poses of reducing teacher agency, as the development of complex computer systems shifts portions of teachers’ decision-making power to software development teams.

Integrating AI systems into education could exacerbate a power imbalance and create new inequalities. Similarly, AI systems can shift the centre of expertise from teachers and school administrators to programmers or system designers, the latter two being responsible for creating the models that diagnose learning outcomes, predict school achievement, and determine which recommendations will be displayed and to whom. (Berendt et al., 2020, p. 317)

This is similar to the role of intelligent tutors, which select teaching materials in place of the teacher and identify and diagnose at-risk students. Consider the example of a mathematics teacher who creates a series of exercises for solving quadratic equations. A priori, one would think that this is a task that could be automated or, at the very least, that existing teaching materials could be identified and reused. This may be true, but we should also try to understand why the teacher chose to create new material rather than reuse existing materials. One reason might be that the teacher is working with a multicultural group and cannot find materials with cultural references relevant to her class. The teacher might also choose to use a series of exercises that are too easy for her students for pedagogical reasons, such as providing a temporary boost to their confidence. As these examples demonstrate, teachers are able to exert their agency in polymotive situations, whereas an AI-based system might consider only a didactic motive.

According to Berendt et al. (2020), the use of AI in education could also lead to a decline in teachers’ skills as they become too dependent on AI systems, to the detriment of their own expertise. There is also reason to believe that automation bias (Parasuraman & Manzey, 2010), the over-reliance on automated decision-making systems, could become an issue for educators who lack the necessary training or agency to challenge the decisions of AI systems. Moreover, if this bias is not recognised by schools, they run the risk of encouraging the use of imperfect AI tools under the illusion that these tools provide better predictions or results (Jones et al., 2020).

Agency always places action in a broader context. Knox (2017) reminds us that software, algorithms, and databases are always used in broader contexts than they appear to be, yet they are often seen as detached from education as such. Student data is submitted, and teachers are encouraged to react to this data without having been part of the process that produced it.

For Corrin et al. (2019), the use of AI-based tools must always involve human intervention for the review of contested decisions and classification errors. Gras (2019), drawing on the General Data Protection Regulation, speaks of the need to maintain human control (p. 4), and Knox (2017) points out that teachers must retain the possibility of refusing the data-driven recommendations of an AI system without fear of negative consequences. Based on a similar concern, Sjödén (2020) goes so far as to ask who should take precedence in the event of a discrepancy over, for example, an assessment grade, a recommended intervention, or a diagnosis of risk.

The importance of preserving agency is underlined by several of the consulted documents. Adams et al. (2021) talk about giving teachers the choice of whether or not to use AI-based tools. Aiken and Epstein (2000) state: ‘at all cost we must preserve the human capacity to solve problems and think rationally’ (p. 166). Holmes et al. (2021) invite us not to fall into a glorification of progress in computer systems that would diminish the role of humans. Yet, Smuha (2020) contrasts with the other reviewed documents by stating that as long as educators retain the ability to choose whether or not to use and trust AI recommendations, then agency can be increased: ‘As long as human beings can meaningfully decide when and under what conditions decisions are delegated to an AI-system, human agency is not only preserved, but can even be empowered’ (Smuha, 2020, p. 8). Finally, amongst all the consulted documents, there is consensus on the need for AI developers to design tools that increase user agency, not restrict it.

Student Agency Results

Within the learning environment, the use of AI can reduce student agency. According to Bulger (2016), this is the case when an AI system assigns school tasks. In higher education settings, Roberts et al. (2017) highlight the risk of infantilising students if AI systems gamify learning experiences when this is neither necessary nor wanted by learners. West et al. (2020) also mention this risk, emphasising the relevance of student voices in the process of regulating learning. Students’ perceptions, comments, and experiences should be considered when making pedagogical decisions and should not be diminished by the use of AI systems. Like teachers, students should also be able to choose whether or not to act in accordance with the recommendations of an AI system (Roberts et al., 2017).

Currently, predictive systems that rely on data can lead to erroneous programme or course recommendations for students (Jones et al., 2020). This is generally a problem of filter bubbles, where recommendation algorithms succeed in identifying preferences from previous data sets, but fail to suggest new interests. This is the case for recommendations on music platforms or streaming services where these types of algorithms actually limit agency (Jones et al., 2020) or, at the very least, participate in redefining the environment in which agency is exercised. Regan and Jesse (2019) point out that these uses, even if they may seem trivial, have an impact on people's ability to manage their lives freely.
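The mechanism behind such filter bubbles can be shown in a few lines. The sketch below is a deliberately simplified content-based recommender, with invented resources and topic weights, that ranks unseen materials purely by their similarity to a learner's past activity; because the ranking is anchored to what has already been consumed, unfamiliar topics are structurally pushed to the bottom.

```python
# A deliberately simplified content-based recommender illustrating the filter-bubble effect.
# Resource names and topic weights (algebra, geometry, statistics) are invented for illustration.
import numpy as np

resources = {
    "algebra drill":         np.array([1.0, 0.0, 0.0]),
    "algebra word problems": np.array([0.9, 0.1, 0.0]),
    "algebra practice test": np.array([0.95, 0.05, 0.0]),
    "geometry proofs":       np.array([0.1, 0.9, 0.0]),
    "intro statistics":      np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The learner profile is built only from past activity (two algebra resources).
history = ["algebra drill", "algebra word problems"]
profile = np.mean([resources[name] for name in history], axis=0)

# Rank unseen resources by similarity to past preferences.
ranked = sorted(
    (name for name in resources if name not in history),
    key=lambda name: cosine(profile, resources[name]),
    reverse=True,
)
print(ranked)
# More of the same ranks first; topics the learner has never touched sink to the bottom,
# so the system never proposes geometry or statistics as a new interest.
```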

Regan and Jesse (2019), building on the work of Kerr and Earle (2013), present three types of predictions that can affect agency: predictions that allow people to anticipate negative consequences, predictions that direct people towards specific decisions, and prescriptive predictions that reduce the choices available to people. According to Regan and Jesse (2019), consequence-based predictions reduce people's agency only slightly, while prescriptive predictions reduce it considerably. Take, for example, the difference between a system that recommends a series of learning resources without hiding other potential resources and another that selects resources and integrates them into a so-called personalised learning pathway. The first maintains a certain agency, while the second reduces it by making decisions on behalf of the learner.
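The contrast between these two designs can also be read as a question of what the system returns to the learner. The two hypothetical functions below sketch this difference; neither corresponds to a real product, and the scoring logic is a placeholder standing in for a learned model.

```python
# Two hypothetical ways of exposing the same predictions to a learner.
from typing import Dict, List

def score_resources(profile: Dict[str, float], catalogue: List[str]) -> Dict[str, float]:
    # Placeholder scoring: in a real system this would come from a learned model.
    return {name: profile.get(name, 0.0) for name in catalogue}

def recommend_with_choice(profile: Dict[str, float], catalogue: List[str]) -> List[str]:
    """Consequence-style design: rank everything, hide nothing, leave the decision to the learner."""
    scores = score_resources(profile, catalogue)
    return sorted(catalogue, key=lambda name: scores[name], reverse=True)

def prescribe_pathway(profile: Dict[str, float], catalogue: List[str], length: int = 3) -> List[str]:
    """Prescriptive design: the system fixes a 'personalised' pathway and the remaining
    options are never shown to the learner."""
    return recommend_with_choice(profile, catalogue)[:length]

catalogue = ["fractions review", "quadratic equations", "geometry proofs", "statistics project"]
profile = {"quadratic equations": 0.9, "fractions review": 0.6}

print(recommend_with_choice(profile, catalogue))  # full, inspectable ranking
print(prescribe_pathway(profile, catalogue))      # pre-selected path, choices removed
```

In Regan and Jesse's (2019) terms, the first function leaves the decision with the learner, while the second makes it on the learner's behalf.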

At the level of didactic use, Sjödén (2020) notes that AI-based systems can integrate false information into the learning environment. He presents three types of processes that AI systems could use and which could pose ethical problems: cases where the systems lie, i.e., present deliberately inaccurate information; cases where they hide information by selecting which data to present; and cases where they maintain erroneous beliefs. Sjödén (2020) asks, ‘To what extent are such illusions ethically justifiable to maintain?’ (p. 293). Here, the link with agency stems from the authenticity of the environment in which agency is exercised. This raises the question: is agency exercised on the basis of partial or false information really agency?

Reiss (2021), drawing on Puddifoot and O’Donnell (2018), points out that tools that pursue an intention to facilitate learning can hinder the intellectual activity necessary for the formation of concepts:

Puddifoot and O’Donnell (2018) argue that too great a reliance on technologies to store information for us – information that in previous times we would have had to remember – may be counterproductive, resulting in missed opportunities for the memory systems of students to form abstractions and generate insights from newly learned information. (p. 4)

Even in the presence of tools aimed at facilitating the task of learners, it may be temporarily relevant to maintain a specific intellectual activity for the development of certain logical structures of thought. For example, a tool that pre-identifies the important passages in a text, sparing the student from reading it completely, might not be desirable if the pedagogical intention is to develop the student’s ability to synthesise. Echoing the concerns raised before the introduction of the calculator, Smuha (2020) speaks of the risk of developing intellectual laziness by interacting with machines that are more efficient at performing certain tasks. According to parents of students, some tools could even become obstacles to learning if students do not develop a critical perspective or place too much trust in them (Qin et al., 2020):

However, some parents worry that AIED systems may make students overly dependent on AI-based systems and lack independent thinking, which brings out parents’ unwillingness to continuously trust in AIED systems. (p. 1699)

Additional risks arise in higher education. Overly guided systems that interfere in the organisation of school work could be perceived as infantilising by stakeholders (Roberts et al., 2017). This risk is also noted by West et al. (2020), who assert that learners must be perceived as people capable of regulating their own learning. Roberts et al. (2017) seek to ensure that students are never forced to act in accordance with the recommendations of a system or on the basis of its performance indicators. It is also important to emphasise that any systems used must be valid, reliable, and capable of performing the tasks for which they were designed in a real context (Smuha, 2020).

Discussion: Some Nuances and Ways to Guard Against Risks

In light of the results, this section systematically returns to two of the elements addressed earlier, namely the technical system and the concept of agency. It also presents some ways to support the development of AI in education while preserving the agency of students and teachers.

Discussion of the Technical System

The emergence of AI, primarily driven by machine learning techniques, enables tools that increasingly shape the context in which individuals engage. As these tools advance, they progressively define and influence the educational landscape. In line with Ellul’s (1977) perspective, teaching can be viewed as a set of techniques and strategies encompassing pedagogical approaches, evaluation methods, and digital tools. However, as these digital tools, particularly those involving AI, grow in complexity, there is a tendency for the underlying intricacies to become obscured, opaque, or even dismissed entirely. In this sense, AI-based computer systems applying these techniques in an educational context cannot be considered only as tools. They redefine several parameters of the educational situation, including the time required to accomplish a task, the need to memorise certain information, the need to ask for help in order to accomplish a task, and the possibilities for social interaction.

The use of AI in education also challenges the powers of educational stakeholders. Some tools represent ‘a form of privatization and commercialism by shifting control over curriculum and pedagogy from teachers and schools to for-profit corporations’ (Saltman, 2020, p. 199). While this shift of power may be desirable for reasons of efficiency or innovation, it must be done cautiously by assessing all the potential consequences that may result from such actions. Likewise, the idea that AI can only be seen as a way of improving the teaching and learning experience should be challenged. Not only is this conception insufficient to describe the impact these tools could have on teaching and learning, but it fails to consider AI as a set of techniques that alter the actions of students and teachers, while also failing to acknowledge that teaching and learning are themselves products of techniques.

Discussion in Relation to Agency

Agency, as we have presented it, involves taking action in situations with conflicting motives. Any use of AI that removes options to take such initiatives limits agency. However, it is possible to mitigate this risk by having AI developers consider the importance of student and teacher agency at the earliest stages of their tool development. According to Kerr and Earle (2013), this could be done by designing systems that empower users to exercise their own judgement and choice rather than relying on predictive systems of consequence, preference, or preemption. This is why, within the realm of learning analytics, the development of dashboards must incorporate the real needs of learners.

Avenues for the Development of AI-Based Tools that Preserve Agency

The results of our literature review raise several risks relating to the maintenance of human agency through the use of AI in education, both for teachers and for students. To counter these risks, we propose three recommendations for designing AI-based educational technologies that preserve, or even reinforce, human agency.

First, AI should not be responsible for making autonomous educational decisions; rather, it should focus on improving the quality of the information that people use to make more informed decisions. This can be done by presenting the probable consequences of decisions (Regan & Jesse, 2019) and by providing transparency about the data used to make those decisions.

Second, with respect to the assessment of learning, the use of AI for rapid, personalised, and frequent feedback, in contexts where teacher feedback is unlikely, seems promising. As in other AI use cases, these systems must be transparent, in particular to students, about how the feedback was produced. Consistent with the first recommendation, assessment systems should focus on providing students with quality feedback, not on making grading decisions or replacing professional judgement.

Finally, learning aids intended for students can be developed, with the caveat that this use of AI seems uncertain at the moment. The pedagogical intentions of curricula do not always make it possible, at a fine-grained level, to distinguish the intellectual activity that is essential from that which is merely instrumental or already acquired. Uses of AI aimed at engaging students in techno-creative projects (Romero et al., 2017) and at involving them in their own skill development seem more promising in the short term, particularly when combined with uses previously intended for the teacher. In higher education, there is the added risk of infantilising students or providing an excess of guidance, which highlights the need for robust learning analytics to determine students’ real needs.

Conclusion

In an educational situation, both teachers and students demonstrate agency on a daily basis. Despite numerous efforts to model educational problems and automate solutions (e.g., the evaluation of learning), it must be kept in mind that these problems can be approached from several points of view that often involve intangible human dimensions and conflicting motives. It is through agency that these situations are resolved on a daily basis. We must therefore avoid the reductionist trap of designing didactic tools detached from the context in which they are used.

While there are a number of recommendations for doing this, we have proposed three: (1) use AI to improve the quality of information rather than to make educational decisions, (2) use AI for formative feedback rather than for certification assessment, particularly while training students and teachers to adopt a critical perspective, and (3) favour uses that engage students and allow them to develop a critical perspective on the role of AI in education.