Abstract
Since OpenAI released ChatGPT, its use in education has been debated by students and teachers at every level of education. Many studies have examined the tool’s possibilities and the threats related to its use, such as incomplete or inaccurate output, or even plagiarism. Many universities worldwide have introduced specific regulations on ChatGPT usage in academic work, and research has appeared on students’ use of and attitudes toward ChatGPT. However, a research gap exists concerning higher education teachers’ acceptance of AI solutions. The goal of this research was to explore the level of acceptance of ChatGPT usage by academics in Poland and to identify the factors influencing their intention to use this tool. The study was motivated by an ongoing academic discussion focused mainly on the disadvantages of AI solutions in scientific work and by the wish to fill the gap by showing teachers’ attitudes toward AI. The data was collected online by inviting academic teachers from Polish public universities to complete a survey based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model extended with Personal Innovativeness. The results reveal the level of acceptance of ChatGPT usage among teachers and researchers at Polish universities and the antecedents influencing their willingness to use this technology in academic work. The paper contributes to the theory of AI usage by structuring the studies on ChatGPT application in teaching and research, and it provides practical recommendations on ChatGPT adoption in the work of academics.
1 Introduction
The research laboratory OpenAI presented ChatGPT in November 2022 (OpenAI, 2023). Within only a year of its launch, it caused an enormous revolution in many areas of our lives. The tool, available to users free of charge, gained immense popularity, gathering one million subscribers in its first few days of operation (De Angelis et al., 2023; Dowling & Lucey, 2023), a feat no other online platform had achieved before. ChatGPT belongs to the family of conversational agents (Car et al., 2020), referred to as chatbots; its capabilities, however, significantly exceed those of previously known chatbots. ChatGPT is a large language model (LLM) built on the GPT framework (Cascella et al., 2023). The freely accessible model runs on the GPT-3.5 architecture (Burger et al., 2023). An upgraded GPT-4 model, available through a subscription since March 2023 (De Angelis et al., 2023), additionally accepts images as input (Zhang et al., 2023), while the earlier free version handles only text and numeric data. Both the free and the paid versions of ChatGPT support operations such as text summarization, question answering, translation, sentiment analysis, text completion, and data classification (Zhang et al., 2023).
The introduction of ChatGPT had numerous implications for artificial intelligence (AI) in science and business. Although tools using AI, including natural language processing, have been used for years in everyday life, it was only the launch of ChatGPT that drew the attention of the public and scientists to the opportunities and threats resulting from the use of AI in work, business, academia, and science. AI has been implemented in email systems to filter SPAM messages (Ahmed et al., 2022), and in 2019 Google implemented the BERT algorithm in its search engine to support natural language understanding in user queries (Buche, 2020). However, such a technologically advanced AI tool has never been made available for such a broad audience, nor has it been emphasized in the discourse that artificial intelligence is responsible for its functionalities. Some researchers have even announced that these tools may cause a revolution in science and the academic community (Haque et al., 2022; Lund et al., 2023).
ChatGPT’s public release has also resulted in very dynamic development of other AI-based models and software. A race among technology companies working on AI solutions has begun (Zhang et al., 2023). Right after the launch of ChatGPT, Microsoft invested in its development and implemented ChatGPT in the Bing search engine. In March 2023, Google published its Bard chatbot as a direct response to ChatGPT. Since the beginning of 2023, numerous work-support solutions have been created at a remarkable pace, with ChatGPT and other language models serving as copilot tools within them. Attention has also been paid to other developments, such as the DALL-E algorithm (Lund et al., 2023; Reddy et al., 2021), based on GPT-3, or the Midjourney tool, which generates images from text descriptions.
The emergence of ChatGPT has sparked numerous debates in the academic community about its capabilities and the opportunities and challenges it creates, with more publications focusing on the perspective of students than on that of university employees (Emenike & Emenike, 2023). In the course of our literature review, we found no existing studies on the acceptance of this technology in the academic environment. This study aims to fill the identified gap by examining the acceptance of this technology among the academic staff of Polish universities. The cross-sectional approach records the attitudes of Polish higher education faculty members toward the use of ChatGPT at a specific point in time, between April 25 and May 25, 2023, providing a snapshot of the attitudes of the study population. Data was collected during the first half-year of the application’s operation, capturing the early attitudinal phase and identifying key variables for tool acceptance.
It should be noted that we did not investigate the types of activities that scientists perform using ChatGPT. This decision was conditioned by the fact that, at such an early stage of ChatGPT’s availability, there were numerous heated discussions about the legality and ethics of using it in research and didactics due to its tendency to hallucinate. As university employees, we know that the role of an academic combines the responsibilities of a researcher conducting experiments and writing research papers with those of a teacher working on the curriculum and tasks for students. Therefore, in the rest of the paper, when referring to the tasks of an academic teacher, we mean both teaching and research work. We do not focus on scientists’ experience of working with this tool in specific fields of activity (research work, preparation of teaching materials). The aim of the article is to examine the attitude of scientists in Poland towards the use of ChatGPT for professional purposes. To achieve this objective, the authors set one research question (RQ): What factors affect academics’ intention to use an artificial intelligence tool such as ChatGPT?
Answering this research question, this paper contributes both to the theory and practice of AI usage in higher education. The review of literature has resulted in the presentation of structured knowledge regarding good and bad practices of ChatGPT application by academics in their research and didactic work. It provides a realistic view on this tool, showing that with the proper control over the content it generates, ChatGPT might be a helpful assistant. From the perspective of practice, we provide recommendations for the academics on how to introduce ChatGPT into their work routine, gradually getting familiar with it and exploring all the possibilities of work (both research and didactic) it can provide.
2 Literature review
Chatbots such as ChatGPT or Bard generate text or continue statements in response to queries such as a prompt or a seed text (Burger et al., 2023). These language models can also change the tone of voice of generated answers or input texts, e.g., to a more formal, scientific, or business one, thus adapting it to the questioner’s needs. They can also perform an organizational function and be used to create summaries of email conversations in a thread and, thanks to integration with other tools, even of audio or video meetings. Programmers, in turn, willingly use them as support in writing, verifying, and processing code.
In education and science, however, the most valuable functions are the ability to summarize scientific papers, write fragments or entire papers, and prepare teaching materials. Below, we consider the possible applications of ChatGPT in supporting academics in their teaching and research tasks, focusing on three major fields: the preparation of educational materials, the preparation of scientific research, and text writing and correction.
In the field of educational materials preparation, researchers point out the opportunities that the usage of ChatGPT brings to academics. The authors often mention the creation of auxiliary and supplementary materials (Emenike & Emenike, 2023; Farrokhnia et al., 2023; Lim et al., 2023), such as quizzes (Cooper, 2023; Farrokhnia et al., 2023) or flashcards (Khan et al., 2023), the development of tests and exam sheets (Cotton et al., 2024; Ivanov & Soliman, 2023), and the generation of incorrect answers for use in tests (Emenike & Emenike, 2023). The tool can also be used to translate materials or create summaries (Emenike & Emenike, 2023; Khan et al., 2023), or to find information on a particular topic (Farrokhnia et al., 2023). The potential of ChatGPT to generate ideas for lesson plans (Farrokhnia et al., 2023; Khan et al., 2023), as well as to develop course descriptions (Emenike & Emenike, 2023; Ivanov & Soliman, 2023), including the use of specific teaching methods or models (Cooper, 2023), is also emphasized in research works. Finally, ChatGPT is considered helpful in evaluating students’ works (Cotton et al., 2024; Farrokhnia et al., 2023; Ivanov & Soliman, 2023; Khan et al., 2023), since the LLM can analyze both the linguistic correctness and the clarity of a text (Ivanov & Soliman, 2023; Khan et al., 2023). It can also provide feedback on students’ work (Farrokhnia et al., 2023) and responses to emails and announcements addressed to students (Emenike & Emenike, 2023). As potential disadvantages of AI usage in educational materials preparation, the threats of plagiarism, content copying, and lack of creativity are mentioned most frequently (Choi et al., 2023; Cotton et al., 2024).
In preparing scientific research, the discussion on ChatGPT usage is rather heated. On the one hand, researchers emphasize that some academics may lack the skills to use AI tools and thus feel reluctant or even afraid to use them (Burger et al., 2023). It has also been observed that ChatGPT can create untrue content with a high degree of credibility (Cascella et al., 2023), provide non-existent evidence (Ariyaratne et al., 2023; De Angelis et al., 2023), add non-existing bibliographic information to support the veracity of a cited paper (Day, 2023; De Angelis et al., 2023; Macdonald et al., 2023), or build false references using real journal names and credible-sounding titles, which can be hard to detect (De Angelis et al., 2023). Furthermore, ChatGPT does not understand the context of statements (Farrokhnia et al., 2023) and cannot answer more abstract questions, which has been confirmed by its creators (OpenAI, 2023). It may also introduce certain simplifications in data analysis (Burger et al., 2023). ChatGPT is likewise incapable of deduction, has limited mathematical skills (Frieder et al., 2023), and does not assess data reliability well (Farrokhnia et al., 2023). Since its knowledge is limited to data collected up to a specific point in the past (Zielinski et al., 2023), it may contain errors (Burger et al., 2023; Carvalho & Ivanov, 2024; Cascella et al., 2023).
On the other hand, researchers notice the opportunities offered by using AI in scientific work, such as improving its effectiveness, relevance, and timeliness, owing to the possibility of keeping up with trends and recently published works (De Angelis et al., 2023; Zhang et al., 2023). AI can improve research methods (Burger et al., 2023) by selecting appropriate statistical tests and generating code adequate for data analysis (Macdonald et al., 2023), identifying knowledge gaps, supporting data organization by generating tables or graphs, explaining results, identifying patterns and trends, suggesting how to interpret results, and checking the consistency of results (Cheng et al., 2023). Its research suggestions can also go beyond the perspectives of individual researchers (Ivanov & Soliman, 2023), which can help eliminate the problem of bias in the interpretation of results (Burger et al., 2023). It has also been pointed out that AI could act as a research assistant (Dowling & Lucey, 2023; Ivanov & Soliman, 2023), helping to identify research trends in a declared field of science (Heaven, 2018), showing trends in grants to find funding opportunities for scientific projects, or extracting information directly from scientific works (Gusenbauer, 2023).
In text writing and correction, it has been shown that tools such as ChatGPT can quickly write a text that sounds academic (Lim et al., 2023), support writing by generating titles, abstracts, and paraphrases (Isaeva, 2022), and introduce a better ordering of scientific literature and a corrected text structure, which can make existing literature easier to search and analyze (Arif et al., 2023) as well as to summarize (Arif et al., 2023; Gao et al., 2023). ChatGPT can also save time for researchers and editors by supporting them in creating metadata and indexing (Lund et al., 2023), and by making a text understandable to the public through simplified language (Cascella et al., 2023). In addition, it can support authors in meeting various journal guidelines by adapting the formatting of their publications (Lund et al., 2023). Tasks such as describing the method and results of a study (Macdonald et al., 2023), editing a text and verifying its clarity (Cheng et al., 2023; Macdonald et al., 2023), suggesting alternative wording or translations (Lund et al., 2023), pointing out inconsistencies in a text, or giving examples of well-written chapters (Cheng et al., 2023) are all suitable for AI used in academic work.
However, it is the role of the researcher to accept the ideas, abstracts, or text formatting suggested by ChatGPT. The researcher is also responsible for correcting and verifying the generated text. Models do not have the knowledge or experience needed to communicate scientific concepts properly or verify the credibility of information (Wittmann, 2023). Those issues, commented on broadly in the literature, may be considered a starting point for the research conducted and presented in this paper on the acceptance of AI usage by Polish university faculty members.
3 Methodology
The widespread adoption of technology in our daily lives has led to rapidly growing interest in understanding the dynamics of its acceptance and use. “The Unified Theory of Acceptance and Use of Technology” (UTAUT), established in 2003 by Venkatesh et al., offers a well-regarded framework for interpreting such behavior. The model aggregates user experiences and integrates concepts that form the theoretical basis of the acceptance of information systems by users (Yu et al., 2021). It comprises the key elements of “Performance Expectancy”, “Social Influence”, “Effort Expectancy”, and “Facilitating Conditions”, which all significantly shape an individual’s intention to use a certain technology. The model also identifies gender, age, and experience as crucial moderators.
An enhanced version, UTAUT2, was subsequently proposed by Venkatesh et al. in 2012, incorporating three additional elements: “Hedonic Motivation”, “Price Value”, and “Habit”. This refined model, developed through robust empirical research, is vital for comprehending what drives the adoption and usage of emergent technologies in various contexts (Tamilmani et al., 2021). UTAUT2 has been employed effectively in university and academic settings to identify the factors influencing higher education workers’ intentions to employ various technological instruments, such as online learning platforms (Samsudeen & Mohamed, 2019), mobile applications (S. Hu et al., 2020), and LMS software (Raman & Don, 2013; Raza et al., 2022; Zwain, 2019). Recent research has further explored how these factors are influenced by specific contexts, like the COVID-19 period (Edumadze et al., 2023; Osei et al., 2022), enriching the process of designing and implementing research and education technology tools.
Our research postulates that the seven constructs of UTAUT2 – “Performance Expectancy”, “Effort Expectancy”, “Social Influence”, “Facilitating Conditions”, “Hedonic Motivation”, “Price Value”, and “Habit” – bear significant influence on the “Behavioral Intention” of researchers to utilize ChatGPT technology in academic work. We endeavor to broaden the scope of the well-grounded UTAUT2 framework by incorporating “Personal Innovativeness” (PI) as a factor influencing behavioral intention toward ChatGPT usage. Personal Innovativeness, an individual’s predisposition towards exploring and adopting innovative IT developments independently, is acknowledged as an impactful element in technology adoption (Agarwal & Prasad, 1998). Many researchers affirm the substantial role of personality traits such as PI in technology adoption, especially within IT (Sitar-Taut & Mican, 2021). This characteristic is viewed as stable, context-specific, and a potent influence on the acceptance and adoption of IT (Strzelecki, 2024; Twum et al., 2022).
3.1 Hypotheses development
This study investigates the effects of the UTAUT2 variables, along with PI, on the behavioral intention of researchers to utilize generative AI in the form of ChatGPT, and assesses its utility in facilitating scholarly pursuits. The objective includes not only identifying these factors but also probing how researchers’ views of ChatGPT usage influence its sustained application.
Performance Expectancy (PE) stands as a significant determinant in individuals’ behavioral intention toward new technology adoption. This term refers to the perceived usefulness or the degree to which individuals believe that utilizing a system will enhance their job or learning performance (Venkatesh et al., 2003). PE has emerged as a crucial component in studying the application of novel technologies. In the educational context, El-Masri and Tarhini (2017) emphasized the substantial, direct and positive role PE plays in the adoption of educational systems. Empirical evidence of PE’s significant influence on the “Behavioral intention” of academics to embrace innovative educational tools like Google Classroom (Kumar & Bervell, 2019), virtual learning environment (Gunasinghe & Nanayakkara, 2021) and LMS (Raman & Don, 2013), has been well documented.
H1: “Performance Expectancy has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
Effort Expectancy (EE), as characterized by Venkatesh et al. (2003), signifies the perceived ease or effort involved in employing technology. It encompasses constructs such as Perceived Ease of Use and Complexity. EE has been identified as an essential determinant in technology acceptance, exerting a direct effect on the “Behavioral intention” towards technology usage. Recent research confirms the positive, direct and significant role of EE in shaping “Behavioral Intention” in various university contexts, including the adoption of mobile technology (S. Hu et al., 2020), software engineering tools (Wrycza et al., 2017) and software for LMS (Raza et al., 2022).
H2: “Effort Expectancy has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
Social Influence (SI) is the degree to which important people, including relatives, peers, etc., think a person should utilize specific technology (Venkatesh et al., 2003). The impact of such social networks is observed to enhance users’ intention to employ technology. Often referred to as a social or subjective norm in prior studies, SI stands as a statistically significant, direct and positive determinant in shaping users’ “Behavioral Intention” toward specific technology usage. This is illustrated in various academic contexts, such as the adoption of MOOCs (Tseng et al., 2022), ICT acceptance (Oye et al., 2014) and LMS (Raman & Don, 2013).
H3: “Social Influence has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
Facilitating Conditions (FC), according to Venkatesh et al. (2003), denote the degree of accessibility of necessary resources and support to accomplish a task. This construct has been extensively researched in technology adoption, highlighting its pivotal role across various IT fields. Within university contexts, FC underscores the importance of having reliable technical infrastructure, knowledge resources, library access, and IT tools, factors that can influence academics’ inclination to use them for enhancing their work. Numerous studies have acknowledged FC as a significant, direct and positive predictor of “Behavioral Intention” and “Use Behavior” among academics, and it is also among the strongest determinants of the extent of actual technology use. FC’s crucial role has been observed in engagement with academic virtual communities (Nistor et al., 2014) and the application of communication and collaboration tools among academic staff (Maican et al., 2019).
H4: “Facilitating Conditions have a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
H5: “Facilitating Conditions have a positive direct influence on the ChatGPT Use Behavior in academic work.”
Hedonic Motivation (HM), the enjoyment or pleasure derived from using a technology, has a substantial influence on users’ intentions (Venkatesh et al., 2012). Existing research suggests an increased likelihood of continued technology use if users derive enjoyment from it. In the realm of information systems, Hedonic Motivation has been observed to directly and positively impact technology usage (Tamilmani, Rana, Prakasam, & Dwivedi, 2019b). Thus, perceiving a system as enjoyable and entertaining typically encourages its adoption and use. In the university setting, HM emerges as a key predictor of behavioral intention, particularly in relation to technology implementation. Its significance is well documented in university contexts such as MOOC adoption (Meet et al., 2022) and LMS use (Raman & Don, 2013). Thus, the following is suggested:
H6: “Hedonic Motivation has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
Venkatesh et al. (2012) claim that Price Value (PV) is a person’s perception of the trade-off between the financial cost of using a system and its benefits. As a crucial determinant of “Behavioral Intention” toward technology usage, PV substantially influences the decision to adopt new technology (Tamilmani et al., 2018). Numerous studies affirm the significant positive and direct effect PV has on the “Behavioral Intention” toward adopting technologies like elearning (Mehta et al., 2019) and mobile learning (Azizi et al., 2020). Some research has reframed PV as “Learning Value”, representing the perceived worthwhile nature of time and effort invested into learning (Ain et al., 2016; Dajani & Abu Hegleh, 2019; Farooq et al., 2017). This construct impacts academics’ intention to leverage new technology for scholarly endeavors (Zwain, 2019). Hence, a hypothesis is put forward:
H7: “Price Value has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
Habit (HT) is the degree to which a person is predisposed to perform behaviors automatically as a result of prior learning and familiarity with the technology. Limayem et al. (2007) and Venkatesh et al. (2012) conceptualize HT as a perceptual variable that has been identified as an important, direct and positive predictor of “Behavioral Intention” and “Use Behavior” (Tamilmani, Rana, & Dwivedi, 2019a). Further, HT has been observed to positively impact academics’ “Behavioral Intention” toward technology use, specifically within teaching and learning processes (Al-Mamary, 2022), elearning platform use (Zacharis & Nikolopoulou, 2022), and the application of platforms like Google Classroom (Alotumi, 2022).
H8: “Habit has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
H9: “Habit has a positive direct influence on the ChatGPT Use Behavior in academic work.”
The literature abounds with thorough examinations of the link between Personal Innovativeness (PI) and the adoption and use of technology in the IT sector (Slade et al., 2015). PI is defined as a predisposition to embrace cutting-edge technological innovations, exhibiting an inclination for the risk-taking associated with trialing new IT advancements (Farooq et al., 2017). Studies have incorporated PI into the UTAUT2 model within the context of university work, investigating the adoption of elearning platforms (Twum et al., 2022), animation usage among university students (Dajani & Abu Hegleh, 2019), and distance learning during the pandemic (Sitar-Taut & Mican, 2021).
H10: “Personal Innovativeness has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”
Behavioral Intention (BI) has become a cornerstone in investigating technology adoption and utilization behaviors (Park et al., 2012). This concept captures the individual’s readiness and intent to employ a specific technology for a given work (Venkatesh et al., 2003, 2012). Previous studies affirm BI’s positive and direct role in shaping actual technology usage (Aldossari & Sidorova, 2020; Gansser & Reich, 2021). The UTAUT2 model posits that seven factors can affect BI. In our research context, we probe BI to comprehend academics’ inclination to employ ChatGPT in their professional duties. A thorough exploration of the link between BI and actual “Use Behavior” offers valuable insights into the elements fostering or impeding technology adoption within the university landscape.
H11: “Behavioral Intention has a positive direct influence on the ChatGPT Use Behavior in academic work.”
Use Behavior (UB) concerns the understanding of user acceptance and usage patterns of technology. In the model, actual Use Behavior denotes the use of a technology for a specific purpose. The effectiveness of the UTAUT2 model in forecasting actual UB across various contexts is supported by empirical research, including applications in e-learning platforms (Zacharis & Nikolopoulou, 2022) and mobile education (Arain et al., 2019). However, Venkatesh et al. (2012) did not specify how actual use should be assessed. In this research, ChatGPT usage is quantified using a seven-point scale ranging from “never” to “several times a day”.
Demographics play a significant role in shaping users’ acceptance of new products or technologies (Mustafa & Zhang, 2022). This study investigates the moderating influence of demographic factors, namely Gender and Age, on hypotheses 1 to 9. The main area of research investigating the impact of demographics has centered on the adoption of novel technologies (Strzelecki & ElArabawy, 2024). Prior research (Hu et al., 2020; Mustafa et al., 2022; Teo et al., 2012) has examined how different demographic factors, such as gender, age, occupation, and experience, influence the adoption of new technologies. Consequently, it is imperative to examine the influence of these demographic factors as moderators in all the relationships established in the core UTAUT2 model. Thus, considering moderation, we offer the following hypotheses:
H12: “There is a moderating effect of Gender on Behavioral Intention to use ChatGPT in academic work.”
H13: “There is a moderating effect of Age on Behavioral Intention and ChatGPT Use Behavior in academic work.”
3.2 Model
According to Venkatesh et al. (2016), UTAUT2 should be the main model for developing hypotheses about the relationships between designated factors and technology adoption. Dwivedi et al. (2019) further stated that past studies often underused UTAUT2, frequently overlooking moderators. This research responds to that gap by using a tailored version of UTAUT2, considering only age and gender as moderators.
This research used a model that included all variables from the widely used UTAUT2 scale to measure technology acceptance (see Fig. 1). We extended the model by including PI from the research of Agarwal and Prasad (1998) and used two moderating variables, age and gender. Data collection employed a seven-point Likert scale ranging from “strongly disagree” to “strongly agree”, and usage behavior was gauged on a seven-point scale from “never” to “several times a day”. Table 1 contains the descriptive statistics and the measurement scale.
3.3 Sample characteristics
Each construct in the study satisfied the reliability and validity standards, and discriminant validity was verified; the scale was recently tested in a study of ChatGPT use among students (Strzelecki, 2024). Hair et al. (2022) stated that research employing the PLS-SEM method requires at least 189 observations to detect R2 values of a minimum of 0.1 at a 5% level of significance. Furthermore, a statistical power of no less than 95% is typically desired in social science research, as proposed by Arnold (1990).
At the end of 2021, the number of academics employed by universities in Poland stood at 99,950 (RAD-on, 2023). To establish the sample size for a population of 99,950, with a 95% confidence level and a 5% margin of error, the formula “n = (z^2 * p * (1-p)) / e^2” (Yamane, 1967) was utilized. In the formula, “n” is the sample size, “z” is the z-score corresponding to the confidence level (1.96 for a 95% confidence level), “p” is the estimated proportion of the population with the desired characteristic (set to 0.5 to yield the maximum sample size), and “e” is the margin of error (0.05). Consequently, the minimum sample size calculated was 385.
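The arithmetic above can be verified with a short script (a minimal sketch; the function name is ours, not part of the study's materials):

```python
import math

def min_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Minimum sample size n = z^2 * p * (1 - p) / e^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# 1.96^2 * 0.5 * 0.5 / 0.05^2 = 384.16, rounded up to 385
print(min_sample_size())  # 385
```

Note that for p = 0.5 the formula is independent of population size, which is why it yields the maximum (most conservative) sample size.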
The study utilized a web survey constructed on Google Docs, which was disseminated to ten major Polish universities: the University of Adam Mickiewicz in Poznan, the University of Lodz, the University of Gdansk, Jan Kochanowski University in Kielce, the University of Warsaw, the University of Rzeszow, Jagiellonian University, Nicolaus Copernicus University in Torun, the University of Zielona Góra, and the University of Szczecin. In the period between April 25 and May 25, 2023, academics at these institutions were asked to take part in the survey through email.
Participants were guaranteed the confidentiality of their answers and the voluntary nature of their involvement. After eliminating eight responses with zero variance, a total of 629 valid responses was compiled, satisfying the minimum sample size requirement. The demographic distribution of the sample was 308 males (49.0%), 290 females (46.1%), and 31 participants (4.9%) who preferred not to disclose their gender. The participants’ average age was 45.3 years (SD = 11.36), with a median of 45 years.
4 Results
We carried out the model estimation with the PLS-SEM algorithm in SmartPLS 4 software (Ringle et al., 2022), using path weighting with default initial weights and a 3,000-iteration limit. The statistical significance of the results was computed through nonparametric bootstrapping with 5,000 samples. Indicators with loadings over 0.7, indicating more than 50% of the variance explained by the construct, were deemed to have acceptable item reliability; the exception was FC4, which was eliminated (Table 1).
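To illustrate the bootstrapping logic, the toy sketch below resamples respondents with replacement and derives a t-value for a standardized coefficient (simplified here to a Pearson correlation). It is an illustration of the resampling idea only, not a reimplementation of SmartPLS, and the function name is our own:

```python
import numpy as np

def bootstrap_t(x, y, n_boot=5000, seed=0):
    """Bootstrap t-value for the standardized slope between two variables.

    The point estimate is divided by the standard deviation of the
    bootstrap distribution, mirroring how PLS-SEM software judges the
    significance of path coefficients.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    estimate = np.corrcoef(x, y)[0, 1]
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return estimate / boots.std(ddof=1)
```

With a large number of resamples, |t| > 1.96 corresponds roughly to p < 0.05 (two-tailed).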
We evaluated reliability through composite reliability, with scores between 0.70 and 0.95 demonstrating good to acceptable reliability (Hair et al., 2022). We also evaluated internal consistency using Cronbach’s alpha, with similar thresholds to composite reliability. Additionally, we calculated the Dijkstra and Henseler’s reliability coefficient (ρA) as an accurate alternative (Dijkstra, 2014; Dijkstra & Henseler, 2015). The convergent validity of the measurement models was evaluated by computing the average variance extracted (AVE) for each reflective variable for all connected items, with an AVE threshold of 0.50 or greater accepted (Sarstedt et al., 2022). The quality criteria were met by all measurements (see Table 2).
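Both Cronbach’s alpha and AVE have simple closed forms; the following sketch uses hypothetical item scores and loadings, not the study’s values:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardized loading."""
    return float(np.mean(np.asarray(loadings) ** 2))

# four hypothetical outer loadings of one reflective construct
print(round(ave(np.array([0.82, 0.79, 0.75, 0.88])), 3))  # 0.658
```

An AVE of 0.658 clears the 0.50 threshold, meaning the construct explains most of its items’ variance.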
For the discriminant validity analysis of PLS-SEM, we employed the heterotrait-monotrait ratio of correlations (HTMT) method, suggested by Henseler et al. (2015). Typically, an HTMT threshold of 0.90 is preferred for conceptually similar constructs, whereas a lower limit of 0.85 is used when constructs are more distinct. In this study, all HTMT values, as shown in Table 3, are comfortably below the 0.85 threshold, thereby indicating strong discriminant validity.
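The HTMT ratio can be computed directly from the item correlation matrix; below is an illustrative NumPy implementation on simulated item blocks (not the SmartPLS routine):

```python
import numpy as np

def htmt(X_i: np.ndarray, X_j: np.ndarray) -> float:
    """Heterotrait-monotrait ratio of correlations for two item blocks,
    each shaped (n_respondents, n_items), per Henseler et al. (2015)."""
    k_i = X_i.shape[1]
    R = np.corrcoef(np.hstack([X_i, X_j]), rowvar=False)
    hetero = R[:k_i, k_i:].mean()                     # between-construct
    tri_mean = lambda B: B[np.triu_indices_from(B, k=1)].mean()
    mono_i = tri_mean(R[:k_i, :k_i])                  # within construct i
    mono_j = tri_mean(R[k_i:, k_i:])                  # within construct j
    return hetero / np.sqrt(mono_i * mono_j)

# two "constructs" whose items share one latent factor score near 1
rng = np.random.default_rng(1)
f = rng.normal(size=(1000, 1))
same = htmt(f + 0.3 * rng.normal(size=(1000, 3)),
            f + 0.3 * rng.normal(size=(1000, 3)))
print(round(same, 2))   # close to 1 -> discriminant validity problem
```

When the two blocks measure genuinely distinct latent variables, the between-construct correlations shrink and the ratio drops well below the 0.85 cutoff.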
In the subsequent phase, we examined R2, which evaluates the percentage of variance in each construct explained by the model. R2 ranges from 0 to 1, with values nearer to 1 indicating stronger explanatory power. Hair et al. (2011) provide a general guideline describing R2 values of 0.25, 0.50, and 0.75 as signifying weak, moderate, and strong explanatory power, respectively.
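The guideline can be expressed as a small helper (the label for values below 0.25 is our own wording):

```python
def r2_label(r2: float) -> str:
    """Rule-of-thumb interpretation of R2 after Hair et al. (2011)."""
    if r2 >= 0.75:
        return "strong"
    if r2 >= 0.50:
        return "moderate"
    if r2 >= 0.25:
        return "weak"
    return "negligible"

# the study's R2 values for BI (0.744) and UB (0.502)
print(r2_label(0.744), r2_label(0.502))  # moderate moderate
```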
The findings from the PLS-SEM, depicted in Fig. 2, show standardized regression coefficients for the path relations and R2 values within the variables’ squares. Nine of the eleven hypotheses were supported. Habit (HT) has the highest impact on Behavioral Intention (H8: β = 0.361, p < 0.01), followed by Performance Expectancy (H1: β = 0.351, p < 0.01) and Hedonic Motivation (H6: β = 0.199, p < 0.01), together accounting for 74.4% of the variance in BI. Although Social Influence (H3: β = 0.083, p < 0.01), Personal Innovativeness (H10: β = 0.087, p < 0.01), and Price Value (H7: β = 0.048, p < 0.05) also positively influence BI, their effects are small. In turn, BI significantly influences Use Behavior (H11: β = 0.436, p < 0.01), with Facilitating Conditions (H5: β = 0.282, p < 0.01) and HT (H9: β = 0.154, p < 0.01) also contributing, collectively explaining 50.2% of the variance in UB. The path coefficients, significance tests, and hypothesis confirmations for the structural model are detailed in Table 4.
In the case of moderating variables of “Age” and “Gender”, the results show that only one moderating effect is statistically significant in the model. Age significantly moderates the path between “Price Value” and “Behavioral Intention” (β = − 0.059, p < 0.05). Other moderating effects are not significant. Moderating effects of “Age” and “Gender” are presented in Table 5.
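A moderation test of this kind amounts to adding a product term to the structural regression; the sketch below uses simulated standardized variables (the coefficients are illustrative, not the study’s data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

pv = rng.standard_normal(n)    # Price Value (standardized)
age = rng.standard_normal(n)   # Age (standardized)
# simulate BI with a negative PV x Age interaction, the pattern reported
bi = 0.3 * pv + 0.1 * age - 0.2 * pv * age + rng.normal(scale=0.5, size=n)

# moderation is captured by the product term's coefficient
X = np.column_stack([pv, age, pv * age])
beta = np.linalg.lstsq(X, bi, rcond=None)[0]
print(beta.round(2))           # third entry: interaction, near -0.2
```

A significantly negative interaction coefficient means the PV → BI slope weakens as Age increases, mirroring the effect reported above.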
Price Value (PV) is understood as the user’s perceived trade-off between the advantages of using an AI tool and its cost, including the time and effort invested in learning it. Since the influence of PV on the academics’ Behavioral Intention (BI) to use ChatGPT is also moderated by their age, we can assume that older academics, who have more work experience, tend to be more deliberate about spending their financial and personal resources on learning new tools and technologies to apply in their work.
5 Discussion
The study presented in this paper was undertaken to examine the factors affecting the adoption and usage of the artificial intelligence tool ChatGPT. The examination was conducted using the adapted UTAUT2 model with eight constructs. The results offer valuable insights into the impact of AI-powered tools on the educational and research processes performed by academics.
The authors put forward eleven hypotheses about the UTAUT2 constructs: eight referring to the effect on academics’ BI to use ChatGPT, and three to the influence on ChatGPT Use Behavior (UB). After the study results were analyzed, nine hypotheses were accepted and two were rejected. It appears that Effort Expectancy (EE) has no direct influence on academics’ BI to use ChatGPT. Thus, it can be claimed that academics are not afraid of putting more effort into employing a technology if they consider it valuable for their work. This result is consistent with Zacharis and Nikolopoulou (2022) and with recent research on the adoption and use of ChatGPT by students by Strzelecki and ElArabawy (2024), where EE was also found to be non-significant.
At the same time, Facilitating Conditions (FC) have no direct influence on BI. As discussed above, FC denotes the degree of accessibility of resources and support for completing a task with the AI tool. This supports the previous statement: academics will not choose a technology to work with solely for its ease of use or accessibility; rather, they focus on the effects of applying the tool. This outcome is in line with previous research by Alowayr (2022) and with a study of students’ acceptance of ChatGPT in higher education (Strzelecki, 2024), where FC was likewise not confirmed.
The nine accepted hypotheses lead to the following conclusions. The most influential factor in determining BI is Habit. The positive and significant effect of HT on BI is consistent with previous research on technology usage in general (Tamilmani, Rana, & Dwivedi, 2019a) and in the teaching and learning process (Al-Mamary, 2022; Alotumi, 2022; Zacharis & Nikolopoulou, 2022). This means that academics’ frequency of using ChatGPT for work will grow as they become more familiar with it and as it becomes as routine for them as, for instance, using a search engine to find information or an online translator to work with foreign languages.
The second most influential factor for BI is Performance Expectancy. PE has been shown to significantly influence teachers’ adoption of new educational tools such as Google Classroom (Kumar & Bervell, 2019). In our case, Performance Expectancy drives academics to embrace a tool that can assist them in both research and teaching. They will be more eager to use ChatGPT when they are confident that it will improve their work.
Hedonic Motivation also positively affects BI, although its influence is weaker than that of the previous two constructs (β = 0.199). A direct impact of HM on BI has been observed for learning platforms and learning management systems (Raman & Don, 2013; Tseng et al., 2022). HM is connected not with the actual effects of using ChatGPT, but with the pleasure and enjoyment one may feel when working with the tool and obtaining valuable results.
Positive effects on BI of almost equal magnitude are observed for Personal Innovativeness (PI) and Social Influence (SI), with path coefficients of 0.087 and 0.083, respectively. The positive influence of SI on technology acceptance (Oye et al., 2014) and on the acceptance of learning tools (Raman & Don, 2013; Tseng et al., 2022) has been demonstrated in previous studies, as have the effects of PI (Sitar-Taut & Mican, 2021; Twum et al., 2022). Personal Innovativeness is a construct driven by the academics themselves: how eager they are to engage with technological innovations and take the risks connected with them. Social Influence, by contrast, reflects the attitude of the academics’ environment (family, friends, colleagues, etc.) toward the analyzed technology. The more actively this environment encourages an academic to use ChatGPT, the more likely they are to use it. However, neither SI nor the readiness to embrace new technologies (PI) is a decisive factor in an academic’s BI to use this AI tool.
The last construct to influence BI positively, though weakly (β = 0.048), is PV. PV refers to a person’s perceived trade-off between the advantages of using a technology and its monetary cost. Previous research has demonstrated PV’s positive influence on BI to adopt new technologies (Tamilmani et al., 2018) and, in particular, learning technologies (Azizi et al., 2020; Mehta et al., 2019). This research shows that PV is not the primary reason why academics would or would not intend to use ChatGPT. One likely explanation is that ChatGPT offers a free version with substantial functionality.
The authors put forward three hypotheses about UB. The most influential factor in determining UB with respect to ChatGPT is the Behavioral Intention to use the tool (β = 0.436). Use Behavior denotes the degree to which a user engages with a technology to perform a task, while Behavioral Intention reflects the user’s readiness and intent to employ the technology for that task. Therefore, the higher the intent to use ChatGPT, the more engaged the user will be in working with the technology. A straightforward implication follows: academics will actively use the AI tool only when they feel a sufficiently high level of readiness and willingness to use it, and this willingness (BI) develops under the influence of the other factors discussed above. The crucial role of BI in technology usage has been highlighted by many researchers (Aldossari & Sidorova, 2020; Gansser & Reich, 2021).
Finally, in addition to BI, FC (β = 0.282) and HT (β = 0.154) have positive and significant direct effects on UB. As mentioned above, HT also significantly influences BI, whereas FC affects only UB directly. Thus, the accessibility of resources and support for applying the AI tool (FC) will not affect an academic’s readiness to use ChatGPT, yet it may affect the level of their engagement in using it.
Additionally, when analyzing the moderating variables of Age and Gender, we found that the only significant moderating effect is that of Age on the path between PV and BI. The effect is negative (β = −0.059), which means that with age respondents are less willing to pay for ChatGPT, even knowing that they would be investing in a tool that might significantly facilitate their teaching and research work. We can suggest several explanations for this phenomenon. First, as discussed above, general caution develops over time and makes people more careful with decisions such as spending money. Second, older academics may resist new technologies more than their younger colleagues, preferring conventional tools; even if they give ChatGPT a try, they may not be ready to invest in its paid version. Third, with age academics accumulate more knowledge, skills, and wisdom in both research and teaching, so they may (and prefer to) rely on their experience more than on ChatGPT or any other AI tool.
As a final word in the discussion, we refer to the work of Emenike and Emenike (2023), who bring up the issue of paid artificial intelligence tools. If (or rather, when) AI-based systems become available only through paid subscriptions, some educational institutions will no doubt be willing to bear such costs to provide their staff with the best tools. While some institutions will be able to afford such an investment, others will lack the funds, and issues of equity and accessibility will then arise. This may lead to a new wave of research dedicated to the opportunities for applying AI-based tools.
5.1 Contributions
The findings of this study contribute to both theory and practice. From a practical perspective, this research offers valuable insights into the acceptance of an artificial intelligence conversational agent for teaching and research purposes. The findings deepen our understanding of the crucial factors that influence the adoption and integration of ChatGPT at higher education institutions.
The results indicate that HT and PE play the most crucial roles in shaping academics’ (teachers’ and researchers’) intentions to accept and use ChatGPT. To begin implementing this tool in their work more frequently, academics need to become accustomed to using ChatGPT often and to accept it as something easy and familiar, but also as something useful and helpful that will facilitate their work and improve its results. Once academics not only feel more confident with ChatGPT but also see the effects of working with it, they will grow to enjoy it. This is where HM will also positively affect their intention to use ChatGPT in the future.
We would also like to draw attention to the effect that Personal Innovativeness may have on academics’ behavioral intention. Taking a risk and experimenting with new technologies such as AI can simply be interesting and exciting. In addition, when it brings valuable results and proves to be a helpful tool that facilitates work, using ChatGPT as an assistant may become a good habit.
Moreover, the SI factor deserves highlighting. People usually follow the example of friends, colleagues, or relatives whom they love and respect. Once academics discover the advantages and disadvantages of using ChatGPT, they should share their knowledge and experience with their colleagues. It is very important that, when forming an opinion about ChatGPT (or any other tool), academics consider both good and bad practices rather than focusing only on someone’s bad experience.
The theoretical contribution of this study lies, first of all, in the literature review conducted. It helps structure the knowledge about the usage of ChatGPT in the preparation of educational materials, in scientific research, and in text writing, all for the purposes of academics at higher education institutions. We believe that our paper may itself serve as an example of SI and might help some academics become more familiar with what ChatGPT offers for education. Together with presenting good examples of ChatGPT assisting research and didactics, the paper emphasizes the importance of treating all content generated by this AI tool with caution and verifying all texts and other materials before using them further. Finally, the paper is an example of applying the UTAUT2 model to analyze the acceptance of artificial intelligence technology.
5.2 Limitations
The authors see two limitations of this research. The first relates to awareness. In the questionnaire, we asked academics about their attitude toward ChatGPT, assuming that they were familiar with the tool and had tried it at least once. Yet it turned out that not all academics had tried ChatGPT, for various reasons: they had not needed it for their work (the issue of academics’ work areas is discussed as the second limitation), they had not been recommended to use it, they had not yet had time to try it, or they were skeptical about it. Consequently, such respondents were not confident about their attitude toward this AI tool, and either preferred not to complete the questionnaire at all or provided opinions about ChatGPT that may not be fully valid.
Secondly, there is the limitation of not distinguishing academics by their areas of work. The authors did not divide the respondents into groups by the subjects they teach or the research areas they work in, nor did we add a question about the research or teaching area to the questionnaire. Therefore, we could not explore, for instance, the difference in the acceptance of ChatGPT between academics who teach languages and those who teach physics, or between those who study artificial intelligence and those who conduct demographic research. Such a comparative analysis would have provided more interesting conclusions about the utility of ChatGPT.
5.3 Future research
The first possible direction of research would be to explore academics’ opinions about AI not only through the UTAUT2 model, which presupposes specific questions, but also through respondents’ open answers. Time-consuming as analyzing such responses may be, they may contain valuable insights into academics’ attitudes toward ChatGPT and the reasons why they prefer to use or not to use this tool in their work.
The second possible avenue for studying the application of ChatGPT by academics would be to analyze the acceptance of this tool for teaching and, separately, for research, and then compare the results and draw conclusions about the utility of this AI tool for these two major roles of academics.
Finally, the third direction for future research derives from the limitation of this study. As mentioned before, the authors did not analyze the areas in which the academics teach or conduct scientific research. Yet, a comparative analysis of the role of ChatGPT in assisting research and teaching in various scientific fields would be a prospective avenue for future research.
6 Conclusions
The objective of this paper was to explore the disposition of academics in Poland toward the use of ChatGPT for research and teaching purposes. The objective was achieved by answering a research question. The factors that may influence academics’ intention to use ChatGPT were analyzed based on the adapted UTAUT2 model. The model includes the seven UTAUT2 constructs, with Personal Innovativeness added as the eighth, for which the authors put forward eleven hypotheses, nine of which were accepted. It was revealed that academics’ BI to use ChatGPT is influenced neither by the amount of effort they must expend to employ this technology nor by the accessibility of resources and support for working with ChatGPT. The remaining constructs (HT, HM, PE, PI, PV, and SI) have a stronger or weaker effect on academics’ intention to use ChatGPT and their engagement in applying this tool.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9(2), 204–215. https://doi.org/10.1287/isre.9.2.204
Ahmed, N., Amin, R., Aldabbas, H., Koundal, D., Alouffi, B., & Shah, T. (2022). Machine learning techniques for Spam detection in email and IoT platforms: Analysis and research challenges. Security and Communication Networks, 2022, 1–19. https://doi.org/10.1155/2022/1862888
Ain, N., Kaur, K., & Waheed, M. (2016). The influence of learning value on learning management system use. Information Development, 32(5), 1306–1321. https://doi.org/10.1177/0266666915597546
Al-Mamary, Y. H. S. (2022). Understanding the use of learning management systems by undergraduate university students using the UTAUT model: Credible evidence from Saudi Arabia. International Journal of Information Management Data Insights, 2(2), 100092. https://doi.org/10.1016/j.jjimei.2022.100092
Aldossari, M. Q., & Sidorova, A. (2020). Consumer acceptance of internet of things (IoT): Smart home context. Journal of Computer Information Systems, 60(6), 507–517. https://doi.org/10.1080/08874417.2018.1543000
Alotumi, M. (2022). Factors influencing graduate students’ behavioral intention to use Google classroom: Case study-mixed methods research. Education and Information Technologies, 27(7), 10035–10063. https://doi.org/10.1007/s10639-022-11051-2
Alowayr, A. (2022). Determinants of mobile learning adoption: Extending the unified theory of acceptance and use of technology (UTAUT). International Journal of Information and Learning Technology, 39(1), 1–12. https://doi.org/10.1108/IJILT-05-2021-0070
Arain, A. A., Hussain, Z., Rizvi, W. H., & Vighio, M. S. (2019). Extending UTAUT2 toward acceptance of mobile learning in the context of higher education. Universal Access in the Information Society, 18(3), 659–673. https://doi.org/10.1007/s10209-019-00685-8
Arif, T. B., Munaf, U., & Ul-Haque, I. (2023). The future of medical education and research: Is ChatGPT a blessing or blight in disguise? Medical Education Online, 28(1), 2181052. https://doi.org/10.1080/10872981.2023.2181052
Ariyaratne, S., Iyengar, K. P., Nischal, N., Chitti Babu, N., & Botchu, R. (2023). A comparison of ChatGPT-generated articles with human-written articles. Skeletal Radiology, 52(9), 1755–1758. https://doi.org/10.1007/s00256-023-04340-5
Arnold, S. F. (1990). Mathematical statistics. Prentice Hall.
Azizi, S. M., Roozbahani, N., & Khatony, A. (2020). Factors affecting the acceptance of blended learning in medical education: Application of UTAUT2 model. BMC Medical Education, 20(1), 367. https://doi.org/10.1186/s12909-020-02302-2
Buche, A. (2020). BERT for opinion mining and sentiment farming. Bioscience Biotechnology Research Communications, 13(14), 35–39. https://doi.org/10.21786/bbrc/13.14/9
Burger, B., Kanbach, D. K., Kraus, S., Breier, M., & Corvello, V. (2023). On the use of AI-based tools like ChatGPT to support management research. European Journal of Innovation Management, 26(7), 233–241. https://doi.org/10.1108/EJIM-02-2023-0156
Car, L. T., Dhinagaran, D. A., Kyaw, B. M., Kowatsch, T., Joty, S., Theng, Y. L., & Atun, R. (2020). Conversational agents in health care: Scoping review and conceptual analysis. Journal of Medical Internet Research, 22(8), e17158. https://doi.org/10.2196/17158
Carvalho, I., & Ivanov, S. (2024). ChatGPT for tourism: Applications, benefits and risks. Tourism Review, 79(2), 290–303. https://doi.org/10.1108/TR-02-2023-0088
Cascella, M., Montomoli, J., Bellini, V., & Bignami, E. (2023). Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1), 33. https://doi.org/10.1007/s10916-023-01925-4
Cheng, K., Li, Z., He, Y., Guo, Q., Lu, Y., Gu, S., & Wu, H. (2023). Potential use of artificial intelligence in infectious disease: Take ChatGPT as an example. Annals of Biomedical Engineering, 51(6), 1130–1135. https://doi.org/10.1007/s10439-023-03203-3
Choi, E. P. H., Lee, J. J., Ho, M. H., Kwok, J. Y. Y., & Lok, K. Y. W. (2023). Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Education Today, 125, 105796. https://doi.org/10.1016/j.nedt.2023.105796
Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. Journal of Science Education and Technology, 32(3), 444–452. https://doi.org/10.1007/s10956-023-10039-y
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148
Dajani, D., & Abu Hegleh, A. S. (2019). Behavior intention of animation usage among university students. Heliyon, 5(10), e02536. https://doi.org/10.1016/j.heliyon.2019.e02536
Day, T. (2023). A preliminary investigation of fake peer-reviewed citations and references generated by ChatGPT. The Professional Geographer, 75(6), 1024–1027. https://doi.org/10.1080/00330124.2023.2190373
De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Frontiers in Public Health, 11. https://doi.org/10.3389/fpubh.2023.1166120
Dijkstra, T. K. (2014). PLS’ Janus face – response to professor Rigdon’s ‘rethinking partial least squares modeling: In praise of simple methods. Long Range Planning, 47(3), 146–153. https://doi.org/10.1016/j.lrp.2014.02.004
Dijkstra, T. K., & Henseler, J. (2015). Consistent and asymptotically normal PLS estimators for linear structural equations. Computational Statistics & Data Analysis, 81, 10–23. https://doi.org/10.1016/j.csda.2014.07.008
Dowling, M., & Lucey, B. (2023). ChatGPT for (finance) research: The Bananarama conjecture. Finance Research Letters, 53, 103662. https://doi.org/10.1016/j.frl.2023.103662
Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734. https://doi.org/10.1007/s10796-017-9774-y
Edumadze, J. K. E., Barfi, K. A., Arkorful, V., & Baffour Jnr, N. O. (2023). Undergraduate student’s perception of using video conferencing tools under lockdown amidst COVID-19 pandemic in Ghana. Interactive Learning Environments, 31(9), 5799–5810. https://doi.org/10.1080/10494820.2021.2018618
El-Masri, M., & Tarhini, A. (2017). Factors affecting the adoption of e-learning systems in Qatar and USA: Extending the unified theory of acceptance and use of technology 2 (UTAUT2). Educational Technology Research and Development, 65(3), 743–763. https://doi.org/10.1007/s11423-016-9508-8
Emenike, M. E., & Emenike, B. U. (2023). Was this title generated by ChatGPT? Considerations for artificial intelligence text-generation software programs for chemists and chemistry educators. Journal of Chemical Education, 100(4), 1413–1418. https://doi.org/10.1021/acs.jchemed.3c00063
Farooq, M. S., Salam, M., Jaafar, N., Fayolle, A., Ayupp, K., Radovic-Markovic, M., & Sajid, A. (2017). Acceptance and use of lecture capture system (LCS) in executive business studies. Interactive Technology and Smart Education, 14(4), 329–348. https://doi.org/10.1108/ITSE-06-2016-0015
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1–15. https://doi.org/10.1080/14703297.2023.2195846
Frieder, S., Pinchetti, L., Griffiths, R.-R., Salvatori, T., Lukasiewicz, T., Petersen, P., & Berner, J. (2023). Mathematical capabilities of ChatGPT. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, & S. Levine (Eds.), Advances in neural information processing systems (Vol. 36, pp. 27699–27744). Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2023/file/58168e8a92994655d6da3939e7cc0918-Paper-Datasets_and_Benchmarks.pdf
Gansser, O. A., & Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535
Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2023). Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. npj Digital Medicine, 6(1), 75. https://doi.org/10.1038/s41746-023-00819-6
Gunasinghe, A., & Nanayakkara, S. (2021). Role of technology anxiety within UTAUT in understanding non-user adoption intentions to virtual learning environments: The state university lecturers’ perspective. International Journal of Technology Enhanced Learning, 13(3), 284–308. https://doi.org/10.1504/IJTEL.2021.115978
Gusenbauer, M. (2023). Audit AI search tools now, before they skew research. Nature, 617(7961), 439. https://doi.org/10.1038/d41586-023-01613-w
Hair, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2022). A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). Sage.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2), 139–152. https://doi.org/10.2753/MTP1069-6679190202
Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., & Ahmad, H. (2022). “I think this is the most disruptive technology”: Exploring sentiments of ChatGPT early adopters using twitter data. http://arxiv.org/abs/2212.05856.
Heaven, D. (2018). AI peer reviewers unleashed to ease publishing grind. Nature, 563(7733), 609–610. https://doi.org/10.1038/d41586-018-07245-9
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
Hu, S., Laxman, K., & Lee, K. (2020). Exploring factors affecting academics’ adoption of emerging mobile technologies-an extended UTAUT perspective. Education and Information Technologies, 25(5), 4615–4635. https://doi.org/10.1007/s10639-020-10171-x
Isaeva, E. (2022). Computer-aided instruction for efficient academic writing. In Z. Hu, S. Petoukhov, & M. He (Eds.), Lecture notes on data engineering and communications technologies (Vol. 107, pp. 546–555). Springer. https://doi.org/10.1007/978-3-030-92537-6_50
Ivanov, S., & Soliman, M. (2023). Game of algorithms: ChatGPT implications for the future of tourism education and research. Journal of Tourism Futures, 9(2), 214–221. https://doi.org/10.1108/JTF-02-2023-0038
Khan, R. A., Jawaid, M., Khan, A. R., & Sajjad, M. (2023). ChatGPT-reshaping medical education and clinical management. Pakistan Journal of Medical Sciences, 39(2), 605–607. https://doi.org/10.12669/pjms.39.2.7653
Kumar, J. A., & Bervell, B. (2019). Google classroom for mobile learning in higher education: Modelling the initial perceptions of students. Education and Information Technologies, 24(2), 1793–1817. https://doi.org/10.1007/s10639-018-09858-z
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790
Limayem, M., Hirt, S. G., & Cheung, C. M. K. (2007). How habit limits the predictive power of intention: The case of information systems continuance. MIS Quarterly, 31(4), 705–737. https://doi.org/10.2307/25148817
Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750
Macdonald, C., Adeloye, D., Sheikh, A., & Rudan, I. (2023). Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. Journal of Global Health, 13, 01003. https://doi.org/10.7189/JOGH.13.01003
Maican, C. I., Cazan, A.-M., Lixandroiu, R. C., & Dovleac, L. (2019). A study on academic staff personality and technology acceptance: The case of communication and collaboration applications. Computers & Education, 128, 113–131. https://doi.org/10.1016/j.compedu.2018.09.010
Meet, R. K., Kala, D., & Al-Adwan, A. S. (2022). Exploring factors affecting the adoption of MOOC in generation Z using extended UTAUT2 model. Education and Information Technologies, 27(7), 10261–10283. https://doi.org/10.1007/s10639-022-11052-1
Mehta, A., Morris, N. P., Swinnerton, B., & Homer, M. (2019). The influence of values on e-learning adoption. Computers & Education, 141, 103617. https://doi.org/10.1016/j.compedu.2019.103617
Mustafa, S., & Zhang, W. (2022). How to achieve maximum participation of users in technical versus nontechnical online Q&A communities? International Journal of Electronic Commerce, 26(4), 441–471. https://doi.org/10.1080/10864415.2022.2123645
Mustafa, S., Zhang, W., Shehzad, M. U., Anwar, A., & Rubakula, G. (2022). Does health consciousness matter to adopt new technology? An integrated model of UTAUT2 with SEM-fsQCA approach. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.836194
Nistor, N., Baltes, B., Dascǎlu, M., Mihǎilǎ, D., Smeaton, G., & Trǎuşan-Matu, Ş. (2014). Participation in virtual academic communities of practice under the influence of technology acceptance and community factors. A learning analytics application. Computers in Human Behavior, 34, 339–344. https://doi.org/10.1016/j.chb.2013.10.051
OpenAI. (2023). ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/
Osei, H. V., Kwateng, K. O., & Boateng, K. A. (2022). Integration of personality trait, motivation and UTAUT 2 to understand e-learning adoption in the era of COVID-19 pandemic. Education and Information Technologies, 27(8), 10705–10730. https://doi.org/10.1007/s10639-022-11047-y
Oye, N. D., Iahad, N. A., & Ab.Rahim, N. (2014). The history of UTAUT model and its impact on ICT acceptance and usage by academicians. Education and Information Technologies, 19(1), 251–270. https://doi.org/10.1007/s10639-012-9189-9
Park, S. Y., Nam, M., & Cha, S. (2012). University students’ behavioral intention to use mobile learning: Evaluating the technology acceptance model. British Journal of Educational Technology, 43(4), 592–605. https://doi.org/10.1111/j.1467-8535.2011.01229.x
RAD-on. (2023). Nauczyciele akademiccy w poszczególnych województwach. https://radon.nauka.gov.pl/raporty/nauczyciele_akademiccy_2022
Raman, A., & Don, Y. (2013). Preservice teachers’ acceptance of learning management software: An application of the UTAUT2 model. International Education Studies, 6(7), 157–164. https://doi.org/10.5539/ies.v6n7p157
Raza, S. A., Qazi, Z., Qazi, W., & Ahmed, M. (2022). E-learning in higher education during COVID-19: Evidence from blackboard learning system. Journal of Applied Research in Higher Education, 14(4), 1603–1622. https://doi.org/10.1108/JARHE-02-2021-0054
Reddy, M., Basha, M., & Chinnaiahgari, H. (2021). Dall-E: Creating images from text. Dogo Rangsang Research Journal, 8(14), 71–75. https://www.journal-dogorangsang.in/no_1_NECG_21/14.pdf
Ringle, C. M., Wende, S., & Becker, J.-M. (2022). SmartPLS 4. SmartPLS GmbH.
Samsudeen, S. N., & Mohamed, R. (2019). University students’ intention to use e-learning systems. Interactive Technology and Smart Education, 16(3), 219–238. https://doi.org/10.1108/ITSE-11-2018-0092
Sarstedt, M., Ringle, C. M., & Hair, J. F. (2022). Partial least squares structural equation modeling. In C. Homburg, M. Klarmann, & A. Vomberg (Eds.), Handbook of market research (pp. 587–632). Springer International Publishing. https://doi.org/10.1007/978-3-319-57413-4_15
Sitar-Taut, D.-A., & Mican, D. (2021). Mobile learning acceptance and use in higher education during social distancing circumstances: An expansion and customization of UTAUT2. Online Information Review, 45(5), 1000–1019. https://doi.org/10.1108/OIR-01-2021-0017
Slade, E. L., Dwivedi, Y. K., Piercy, N. C., & Williams, M. D. (2015). Modeling consumers’ adoption intentions of remote mobile payments in the United Kingdom: Extending UTAUT with innovativeness, risk, and trust. Psychology & Marketing, 32(8), 860–873. https://doi.org/10.1002/mar.20823
Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. https://doi.org/10.1007/s10755-023-09686-1
Strzelecki, A., & ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. British Journal of Educational Technology, 55(3), 1209–1230. https://doi.org/10.1111/bjet.13425
Tamilmani, K., Rana, N., Dwivedi, Y., Sahu, G. P., & Roderick, S. (2018). Exploring the role of "price value" for understanding consumer adoption of technology: A review and meta-analysis of UTAUT2 based empirical studies. In PACIS 2018 Proceedings (p. 64). https://core.ac.uk/download/pdf/301376155.pdf
Tamilmani, K., Rana, N. P., & Dwivedi, Y. K. (2019a). Use of 'habit' is not a habit in understanding individual technology adoption: A review of UTAUT2 based empirical studies. In A. Elbanna, Y. K. Dwivedi, D. Bunker, & D. Wastell (Eds.), Smart working, living and organising (pp. 277–294). https://doi.org/10.1007/978-3-030-04315-5_19
Tamilmani, K., Rana, N. P., Prakasam, N., & Dwivedi, Y. K. (2019b). The battle of brain vs. heart: A literature review and meta-analysis of “hedonic motivation” use in UTAUT2. International Journal of Information Management, 46, 222–235. https://doi.org/10.1016/j.ijinfomgt.2019.01.008
Tamilmani, K., Rana, N. P., Wamba, S. F., & Dwivedi, R. (2021). The extended unified theory of acceptance and use of technology (UTAUT2): A systematic literature review and theory evaluation. International Journal of Information Management, 57, 102269. https://doi.org/10.1016/j.ijinfomgt.2020.102269
Teo, A. C., Tan, G. W. H., Cheah, C. M., Ooi, K. B., & Yew, K. T. (2012). Can the demographic and subjective norms influence the adoption of mobile banking? International Journal of Mobile Communications, 10(6), 578. https://doi.org/10.1504/IJMC.2012.049757
Tseng, T. H., Lin, S., Wang, Y. S., & Liu, H. X. (2022). Investigating teachers’ adoption of MOOCs: The perspective of UTAUT2. Interactive Learning Environments, 30(4), 635–650. https://doi.org/10.1080/10494820.2019.1674888
Twum, K. K., Ofori, D., Keney, G., & Korang-Yeboah, B. (2022). Using the UTAUT, personal innovativeness and perceived financial cost to examine student’s intention to use E-learning. Journal of Science and Technology Policy Management, 13(3), 713–737. https://doi.org/10.1108/JSTPM-12-2020-0168
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
Venkatesh, V., Thong, J., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems, 17(5), 328–376. https://doi.org/10.17705/1jais.00428
Wittmann, J. (2023). Science fact vs science fiction: A ChatGPT immunological review experiment gone awry. Immunology Letters, 256–257, 42–47. https://doi.org/10.1016/j.imlet.2023.04.002
Wrycza, S., Marcinkowski, B., & Gajda, D. (2017). The enriched UTAUT model for the acceptance of software engineering tools in academic education. Information Systems Management, 34(1), 38–49. https://doi.org/10.1080/10580530.2017.1254446
Yamane, T. (1967). Statistics: An introductory analysis (2nd ed.). Harper and Row.
Yu, C.-W., Chao, C.-M., Chang, C.-F., Chen, R.-J., Chen, P.-C., & Liu, Y.-X. (2021). Exploring behavioral intention to use a mobile health education website: An extension of the UTAUT 2 model. SAGE Open, 11(4), 1–12. https://doi.org/10.1177/21582440211055721
Zacharis, G., & Nikolopoulou, K. (2022). Factors predicting university students' behavioral intention to use eLearning platforms in the post-pandemic normal: An UTAUT2 approach with 'Learning Value'. Education and Information Technologies, 27(9), 12065–12082. https://doi.org/10.1007/s10639-022-11116-2
Zhang, C., Zhang, C., Li, C., Qiao, Y., Zheng, S., Dam, S. K., Zhang, M., Kim, J. U., Kim, S. T., Choi, J., Park, G.-M., Bae, S.-H., Lee, L.-H., Hui, P., Kweon, I. S., & Hong, C. S. (2023). One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era. http://arxiv.org/abs/2304.06488
Zielinski, C., Winker, M., Aggarwal, R., Ferris, L., Heinemann, M., Lapeña, J. F., Pai, S., Ing, E., & Citrome, L. (2023). Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. Afro-Egyptian Journal of Infectious and Endemic Diseases, 13(1), 75–79. https://doi.org/10.21608/aeji.2023.282936
Zwain, A. A. A. (2019). Technological innovativeness and information quality as neoteric predictors of users’ acceptance of learning management system. Interactive Technology and Smart Education, 16(3), 239–254. https://doi.org/10.1108/ITSE-09-2018-0065
Ethics declarations
Competing interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Strzelecki, A., Cicha, K., Rizun, M. et al. Acceptance and use of ChatGPT in the academic community. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12765-1