Abstract
Generative Artificial Intelligence (GAI) has recently gained popularity with the advent of text and image generation tools (e.g., ChatGPT) that are easy for the general public to use. The emergence of GAI has sparked a surge in academic studies within higher education (HE), but it has also raised concerns about the policy changes it demands. This chapter analyses the impact of GAI on HE, addressing its uses in language learning, chatbot applications, and responsible AI implementation. Evaluating both its benefits and limitations, the chapter draws on diverse studies to present insights into GAI's potential in education, while emphasising the need for responsible deployment and ethical considerations.
Introduction
Generative Artificial Intelligence (GAI) was not a popular discussion topic among faculty in Higher Education (HE) until the emergence of tools such as ChatGPT. Since 2021, academic discourse, both on policies governing the use of AI and on the potential opportunities of AI for education, has been rising as a research topic (Southworth et al., 2023). Given the widespread availability of generative AI tools like ChatGPT, it is imperative for Higher Education Institutions (HEI) to carefully examine the practical applications and possible difficulties of AI for both professors and students. Although we must acknowledge the potential applications of AI, it is crucial to address the issue of regulating its use in the context of university work undertaken by undergraduate and graduate students. Rudolph et al. (2023) point to the challenges facing HE institutions that continue to use traditional assessment strategies, as the availability of tools such as ChatGPT makes it difficult to evaluate the originality of student work. Conversely, AI can be used to scaffold student learning and create more personalised HE experiences.
GAI typically refers to advanced technology that integrates deep learning models trained on extensive datasets gathered from various public sources, user-generated content, licensed third-party data, and information created by human reviewers (OpenAI, 2023). This technology processes human inputs, commonly known as prompts, and generates outputs that closely mimic human-generated content, predominantly in the form of text and images (Lim et al., 2023). Because of the scale of these models, the software developers building tools on top of them frequently lack direct insight into the quality and type of data used for training. Additionally, they are often unable to meet data retention or privacy requirements, given that they cannot host these models in their own independent computing environments. Hence, much as with a search engine, a typical user currently relies on remote server interactions when exchanging data through GAI tools. This raises significant concerns regarding privacy and the potential for information leakage. The issue is particularly acute with GAI tools like ChatGPT, which invite more detailed text input than the relatively brief queries submitted to typical search engines.
In this chapter, we address these different domains by analysing the uses of AI in HE, with a special focus on generative AI. The second section addresses AI for language learning and translation in HE. The third section explores the use of conversational agents (i.e., chatbots) at the university level, while the last section addresses the responsible uses of AI in HE with a special focus on the role of assessments.
Uses of AI in Higher Education
In their 2023 literature review, Baidoo-Anu and Ansah focused on Generative Artificial Intelligence (GAI) and education-related papers published in English-language, peer-reviewed journals. The review aimed to achieve two primary objectives: to evaluate the various methods of interacting with ChatGPT, and to discern the advantages and disadvantages of integrating GAI into educational practices. Within this framework, the authors introduced a typology categorising the observed benefits of employing GAI in education, encompassing areas such as personalised tutoring, automated essay grading, language translation, interactive learning, and adaptive learning. Simultaneously, the review outlined a series of limitations associated with these AI applications, including the lack of human interaction, limited understanding of the technology, potential biases in training data sets, lack of creativity, dependency on the data available or generated for AI training, lack of contextual understanding, limited ability to personalise instruction, and privacy concerns. This detailed exploration of both benefits and limitations contributes to a more nuanced understanding of the impact of GAI in the realm of HE andragogy.
Several exploratory studies have applied prompt engineering (Lee et al., 2023) to more effectively examine how the GPT-3.5 model aligns with educational objectives and its suitability for such purposes. The primary methodology involves analysing the model's responses and evaluating their congruence with educational objectives. This assessment is conducted through a self-study approach as introduced by Hamilton et al. (2009). Cooper (2023) utilised ChatGPT by presenting it with a series of questions designed to elicit responses providing practical guidance for teachers on classroom applications. Findings suggest that ChatGPT performs well in generating teaching units, rubrics, and quizzes. Additionally, the results indicate that ChatGPT can aid educators in the creation of science education units structured around the 5Es model (Engage, Explore, Explain, Elaborate, Evaluate), thereby assisting in the transition from initial ideas to fully developed educational units.
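The structured prompting described above can be sketched as a simple template that encodes the 5Es model into a single, detailed request. The function name, template wording, and topic below are purely illustrative and are not drawn from Cooper's (2023) study:

```python
# A minimal sketch of structured prompt construction for lesson generation,
# in the spirit of the prompt-engineering studies discussed above.
# The template and function names are illustrative, not from the original work.

FIVE_ES = ["Engage", "Explore", "Explain", "Elaborate", "Evaluate"]

def build_unit_prompt(topic: str, audience: str) -> str:
    """Assemble a structured prompt asking a chat model for a 5Es teaching unit."""
    phases = "\n".join(f"- {phase}: describe one activity" for phase in FIVE_ES)
    return (
        f"Act as a science curriculum designer. Create a teaching unit on "
        f"'{topic}' for {audience} students, organised around the 5Es model:\n"
        f"{phases}\n"
        "Finish with a short grading rubric and a five-question quiz."
    )

prompt = build_unit_prompt("photosynthesis", "first-year undergraduate")
print(prompt)
```

A prompt built this way would then be sent to a chat model; the more constrained the structure, the easier it is to evaluate the congruence of the response with the stated educational objectives.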
Similarly, Qadir (2023) examined ChatGPT's role in engineering education, underscoring its potential as a generative AI tool across technically demanding educational settings. The research employs structured prompts as a method for eliciting detailed AI responses that facilitate various real-life educational applications. Although not based on a specific pedagogical framework, the study provides empirical evidence of ChatGPT's utility in engineering education. Its applications span technical subjects like coding and mathematics, creative writing, virtual tutoring, personalised learning, test preparation, and language learning. Conversely, some of the disadvantages include the lack of human interaction in providing personalised feedback to learners. Moreover, as the study involved an older model of ChatGPT (i.e., GPT-3.5 architecture), concerns regarding reliability, plagiarism, and hallucinative misinformation were noted as potential shortcomings when using automated feedback. The paper also emphasises the growing importance of prior knowledge and critical thinking, given their importance in creating prompts that generate quality responses.
Chan (2023) studied the use of text-generative AI technologies in Hong Kong universities to develop a framework for the integration of AI into education. The study engaged 457 students and 180 faculty and staff members. Chan identified three dimensions in the integration of AI in higher education: the pedagogical, the governance, and the operational dimension. Within the context of HE andragogy, Chan's (2023) framework highlights the importance of reengineering the assessment process given the innovative methodologies made possible through the use of AI. In this sense, both automatic analysis and the exploitation of learning analytics for assessment purposes (Ouyang et al., 2023) were examined. It is also within the pedagogical dimension that Chan stresses the importance of developing the transversal competencies considered critical for future success in the innovation economy (Septiani et al., 2023).
Within the governance dimension, senior leadership plays a pivotal role in addressing the complex considerations related to AI implementation in education. This encompasses strategies to understand, identify, and prevent academic misconduct and ethical dilemmas facilitated by AI, as well as the establishment of robust policies and protocols for data privacy, transparency, accountability, and security in AI usage. In this sense, the AI Act, developed at the European level, aims to foster trustworthy AI (Laux et al., 2023). The European ethics guidelines underpinning this approach set out seven requirements for 'trustworthy' AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability (EC, 2019). Furthermore, the governance dimension extends to AI attribution and the clear definition of roles and responsibilities for the technology's implementation and management, thereby ensuring accountability for its ethical use within the institution. Equally significant is the commitment to ensuring equity in access to AI technologies, achieved through implementing measures that guarantee fair and inclusive access to AI resources for all students and faculty, while also addressing potential disparities in AI utilisation across different demographic groups.
The operational dimension, involving teaching and learning stakeholders as well as IT staff, centres on the practical aspects of AI implementation in university settings. This includes establishing robust monitoring mechanisms to assess the effectiveness of AI integration and continuously evaluating its impact on teaching, learning, and overall educational outcomes. Moreover, this dimension encompasses comprehensive training programmes and ongoing support structures that enhance AI literacy among faculty, staff, and students, while addressing the challenges and benefits of proficient AI use across the university.
Sabzalieva and Valentini's (2023) guide on the use of generative AI in HE introduces the technology and presents different use cases in which an interactive tool fulfils the role of tutor, Socratic opponent, or even co-designer.
AI for Language Learning and Translation
The prevalent use of English in research studies, publications, and competitive grants at the international level creates an inclusivity barrier, not only for faculty and students, but also for administrative staff who lack the necessary proficiency in English to fully participate in the academic process (Ingvarsdóttir & Arnbjörnsdóttir, 2013). This linguistic dominance reduces the opportunities and competitiveness of non-native English speakers and limits access to learning resources for students who lack English fluency. As such, AI-powered writing and translation tools, such as Grammarly or QuillBot, can play a pivotal role in facilitating accessibility, while also improving the overall quality of written work from both a grammatical and a contextual perspective. Implementing AI-driven translation tools can reduce language barriers, as in the case of the MSc Smart EdTech programme at the Université Côte d'Azur, where most of the students and faculty use English as a second language. With an international cohort representing eighteen different countries, the use of real-time translation tools during synchronous, online activities helped create a more inclusive learning environment for non-fluent English speakers. Similarly, the use of automatic note-taking tools has facilitated the creation of meeting minutes and made documenting projects easier in the context of research initiatives such as the Horizon AugMentor project.
Chatbots in Higher Education
Expanding from the pioneering work of Eliza (Weizenbaum, 1966), acknowledged as the first chatbot in academic research, the exploration of chatbot-assisted learning environments has progressed significantly over the last few decades. Nevertheless, it is in recent years that chatbots have experienced a substantial surge in both usage scenarios and research within the educational realm, reaching a pinnacle in 2023 (Hwang & Chang, 2021). The most common application of chatbots in educational settings is as a tool for interacting with predefined content learning paths, often referred to as guided learning (Akcora et al., 2018). The current availability of generative AI chatbot services and chatbot application programming interfaces (APIs) enables educators to add a new layer of chatbot capabilities that can support active learning approaches in education (Lo, 2023). Chatbots were created to enable natural language interaction between humans and computers. As such, chatbots can be considered computer programmes that aim to mimic some aspects of human interaction, supported by machine conversation systems, virtual agents, dialogue systems, and personal assistants, all with the goal of supporting the end-user (Suhaili et al., 2021). The systematic literature review by Okonkwo and Ade-Ibijola (2021) outlines the diverse applications of chatbots in HE, including teaching and learning (66%), research and development (19%), assessment (6%), administration (5%), and advisory (4%).
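The Eliza lineage mentioned above can be illustrated with a small set of pattern-response rules, the core mechanism of rule-based, guided-learning chatbots. The patterns and canned responses below are purely illustrative; real systems use far richer rule sets or generative models:

```python
# A minimal Eliza-style, rule-based chatbot sketch: each rule pairs a
# regular expression with a canned response, optionally echoing back
# text captured from the user's message.
import re

RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.IGNORECASE),
     "Hello! Which topic would you like to review today?"),
    (re.compile(r"\bquiz\b", re.IGNORECASE),
     "Sure - here is a practice question: what does NLP stand for?"),
    (re.compile(r"i (?:do not|don't) understand (.+)", re.IGNORECASE),
     "Let's break {0} down into smaller steps."),
]
FALLBACK = "Could you rephrase that?"

def reply(message: str) -> str:
    """Return the first matching canned response, echoing captured groups."""
    for pattern, response in RULES:
        match = pattern.search(message)
        if match:
            return response.format(*match.groups())
    return FALLBACK

print(reply("Hi there"))
print(reply("I don't understand recursion"))
```

Guided-learning chatbots extend this idea by attaching rules to a predefined content path, so that each matched intent advances the learner to the next step.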
Integrating chatbots into HE settings enables universities to address the specific uses outlined in the UNESCO table on the use of generative AI (see Table 10.1). This is possible due to the versatile nature of these tools and the adaptive experience they offer to users. Furthermore, the ability of chatbots to understand and generate human-like responses, enabled by Natural Language Processing (NLP), creates a more intuitive way for students and educators to interact with the tools (Maher et al., 2020; Rath et al., 2023), while also facilitating the personalisation of the learning experience based on individual student needs (Younis et al., 2023).
The inclusion of NLP capabilities in the training of chatbots has benefited developers and pedagogical staff seeking to personalise their use for HE environments. Additional refinements to the personalisation process occur during tokenisation (see Note 1), where text is converted into smaller units in order to make the chatbots more efficient and effective (Bhartiya et al., 2019). Once the chatbot is trained and generating responses with a high level of reliability, developers continue to monitor its performance and provide iterative training to ensure that it functions smoothly and provides accurate information. Moreover, Tsivitanidou and Ioannou (2021) consider the potential of chatbots to support certain types of learning scenarios in HE.
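Tokenisation, as defined in Note 1, can be sketched in a few lines. The whitespace-and-punctuation splitter below is only for illustration; production chatbots typically use subword tokenisers (e.g., byte-pair encoding) rather than word-level splitting:

```python
# A minimal sketch of word-level tokenisation: split lowercased text into
# word tokens and punctuation tokens. Illustrative only - real NLP
# pipelines use trained subword tokenisers.
import re

def tokenize(text: str) -> list[str]:
    """Split text into word/number tokens and single punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("Chatbots support learning, don't they?")
print(tokens)
```

Even this toy example shows why tokenisation choices matter for bias and fairness: how contractions, accents, or non-Latin scripts are split directly shapes what the downstream model can represent.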
In their guidelines, Chocarro et al. (2023) explain the desirability of empowering teaching and administrative staff to effectively use AI, even as its use continues to face questions regarding user digital literacy, ethics, data privacy, and how these tools impact current pedagogical strategies.
It is essential that a comprehensive artificial intelligence training program for university students and staff includes learning skills related to using, developing, and implementing chatbots. This inclusion is crucial at various levels:
- Problem identification: Universities should support the pedagogical empowerment and the AI acculturation of operational staff to identify opportunities and problems that can be addressed using chatbots.
- Theoretical/practical framework: Universities should provide assistance in the creation and development of chatbot-based assisted learning scenarios based on the needs of educators and learners.
- Ubiquitousness: Universities should ensure that the use of chatbots is democratised, widespread, and reflective of the latest technological and training iterations.
- Practical use: Universities should assist students in learning the skills necessary to seamlessly integrate chatbots into their learning environment.
- Assessment: Universities should continuously assess and refine their chatbot training models following a holistic set of quality standards.
When contemplating the implications associated with the development and application of chatbots in educational settings, educators have the responsibility to consider the role biases play in their usage and development. For instance, tokenisation, a key aspect in data processing, requires careful attention to mitigate instances of information biases and to ensure that the chatbots maintain a pedagogical perspective. Ensuring that the tokenisation of data is executed without bias is imperative, as it directly impacts the effectiveness, accuracy, fairness, and equity of the educational experience as facilitated by the chatbot (Akter et al., 2021). Furthermore, educators must advocate for the availability of chatbot APIs capable of seamlessly functioning in different languages, thereby fostering inclusivity and accommodating linguistic diversity (Mogavi et al., 2023).
Finally, the shift of AI chatbot development toward no-code and low-code tools, most notably OpenAI's custom GPTs released in November 2023, marks a significant milestone in democratising chatbot development for education (Lim et al., 2023). This shift has a notable impact on the development and use of chatbots, as it allows individuals to create their own customised AI tools without extensive knowledge of software development or programming. In the context of educational use cases, this removes many of the significant barriers preventing educators from leveraging their unique expertise and training in the creation of their own AI tools.
Responsible Use of Generative AI Tools in Academia
In November 2022, OpenAI made ChatGPT publicly available for free, employing the highly advanced GPT-3.5 model as its backbone Large Language Model (LLM). By January 2023, OpenAI announced that ChatGPT had accumulated over 100 million users, setting a new global record as the fastest-growing application to date (Lim et al., 2023). The exponential growth of ChatGPT had a significant impact on the education sector, as students and educators around the world began exploring the app's novel functionalities. Delivered through an intuitive and user-friendly chatbot interface, ChatGPT's text translation, question-answering (Q&A), and text generation capabilities introduced new opportunities and challenges to modern learning and teaching values, norms, and methodologies. This development has been met with mixed reactions from the educational community, prompting educational institutions worldwide to establish ad hoc committees of experts tasked with revising their ethical frameworks, guidelines, and recommendations concerning the use of Generative AI (GAI) in education and pedagogy.
Russell Group universities provide a comprehensive framework dedicated to promoting the ethical and responsible utilisation of GAI tools within academic settings. These institutions are steadfast in their commitment to following established guidelines for the ethical use of AI tools in education, as outlined by the principles set forth by the Russell Group. This commitment involves fostering AI literacy among both students and staff and empowering educators as they guide students in the effective and responsible use of generative AI tools. Additionally, the Russell Group universities actively engage in reviewing and adapting curriculum, teaching methods, and assessment practices to seamlessly integrate the ethical use of generative AI and ensure equitable access for all. This commitment extends to upholding academic rigour and integrity, while also fostering collaboration with other institutions to share best practices in response to the evolving technological landscape and its educational applications.
Notes
1. Tokenisation is the process of transforming a sequence of characters into a collection of distinct tokens. In the realm of computer science, tokens encompass a variety of elements, including words, integers, identifiers, special characters, and punctuation marks (Bhartiya et al., 2019).
References
Akcora, D. E., Belli, A., Berardi, M., Casola, S., Di Blas, N., Falletta, S., Faraotti, A., Lodi, L., Diaz, D. N., Paolini, P., Renzi, F., & Vannella, F. (2018, June 27–30). Conversational support for education. In Artificial intelligence in education: 19th International Conference. AIED 2018, London, UK, 2018, Proceedings, Part II 19 (pp. 14–19). Springer.
Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387.
Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62.
Bhartiya, N., Jangid, N., Jannu, S., Shukla, P., & Chapaneri, R. (2019, July). Artificial neural network based university chatbot system. In 2019 IEEE Bombay Section Signature Conference (IBSSC) (pp. 1–6). IEEE.
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38.
Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2023). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 49(2), 295–313.
Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. Journal of Science Education and Technology, 32(3), 444–452.
European Commission. (2019). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Hamilton, M. L., Smith, L., & Worthington, K. (2009). Fitting the methodology with the research: An exploration of narrative, self-study and auto-ethnography. Studying Teacher Education, 4(1), 17–28. https://doi.org/10.1080/17425960801976321
Hwang, G.-J., & Chang, C.-Y. (2021). A review of opportunities and challenges of chatbots in education. Interactive Learning Environments. https://doi.org/10.1080/10494820.2021.1952615
Ingvarsdóttir, H., & Arnbjörnsdóttir, B. (2013). ELF and academic writing: A perspective from the expanding circle. Journal of English as a Lingua Franca, 2(1), 123–145.
Laux, J., Wachter, S., & Mittelstadt, B. (2023). Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance, 18, 3–32.
Lee, U., Jung, H., Jeon, Y., Sohn, Y., Hwang, W., Moon, J., & Kim, H. (2023). Few-shot is enough: Exploring ChatGPT prompt engineering method for automatic question generation in English education. Education and Information Technologies, 1–33. https://doi.org/10.1007/s10639-023-12249-8
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790.
Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. https://doi.org/10.3390/educsci13040410
Maher, S., Kayte, S., & Nimbhore, S. (2020). Chatbots & its techniques using AI: A review. International Journal for Research in Applied Science and Engineering Technology, 8(12), 503–508.
Mogavi, R. H., Deng, C., Kim, J. J., Zhou, P., Kwon, Y. D., Metwally, A. H. S., Tlili, A., Bassanelli, S., Bucchiarone, A., Gujar, S., Nacke, L. E., & Hui, P. (2023). ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters' utilization and perceptions. Computers in Human Behavior: Artificial Humans, 100027. https://doi.org/10.1016/j.chbah.2023.100027
Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033.
OpenAI. (2023). Enterprise privacy. OpenAI. Retrieved November 22, 2023 from https://openai.com/enterprise-privacy
Ouyang, F., Wu, M., Zheng, L., Zhang, L., & Jiao, P. (2023). Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. International Journal of Educational Technology in Higher Education, 20(1), 1–23.
Qadir, J. (2023). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. In 2023 IEEE global Engineering Education Conference (EDUCON) (pp. 1–9). IEEE.
Rath, S., Pattanayak, A., Tripathy, S., Priyadarshini, S. B. B., Tripathy, A., & Tanvi, S. (2023). Prediction of a novel Rule-Based Chatbot Approach (RCA) using natural language processing techniques. International Journal of Intelligent Systems and Applications in Engineering, 11(3), 318–325.
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1).
Sabzalieva, E., & Valentini, A. (2023). ChatGPT and artificial intelligence in higher education: Quick start guide. UNESCO.
Septiani, D. P., Kostakos, P., & Romero, M. (2023, July). Analysis of creative engagement in AI tools in education based on the #PPai6 framework. In International conference in methodologies and intelligent systems for technology enhanced learning (pp. 48–58). Springer.
Southworth, J., Migliaccio, K., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4, 100127.
Suhaili, S. M., Salim, N., & Jambli, M. N. (2021). Service chatbots: A systematic review. Expert Systems with Applications, 184, 115461.
Tsivitanidou, O., & Ioannou, A. (2021, July). Envisioned pedagogical uses of chatbots in higher education and perceived benefits and challenges. In International conference on human-computer interaction (pp. 230–250). Springer.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.
Younis, H. A., Ruhaiyem, N. I. R., Ghaban, W., Gazem, N. A., & Nasser, M. (2023). A systematic literature review on the applications of robots and natural language processing in education. Electronics, 12(13), 2864.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
About this chapter
Cite this chapter
Romero, M., Reyes, J., Kostakos, P. (2024). Generative Artificial Intelligence in Higher Education. In: Urmeneta, A., Romero, M. (eds) Creative Applications of Artificial Intelligence in Education. Palgrave Studies in Creativity and Culture. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-55272-4_10
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-031-55271-7
Online ISBN: 978-3-031-55272-4