Abstract
Student academic misconduct continues to vex higher education institutions in the United States and internationally. The COVID pandemic learning environment yielded more rather than fewer reports of student academic misconduct. Substantial empirical research has considered the nature of academic misconduct in higher education institutions by identifying its antecedents and correlates. But given the reproducibility crisis in social research, the quality of knowledge that students have of academic misconduct warrants further empirical corroboration. With the intent to replicate, this study used Quantitative Content Analysis to examine 2631 written responses from first-year undergraduate students as they participated in academic misconduct programming implemented by a public university in the United States. Results showed that a staggering proportion of first-year students possess piecemeal (at best) or non-existent (at worst) knowledge of citations/references and cheating. Furthermore, such proportions are uneven across specific college memberships. Results corroborate prior research that first-year undergraduate students hold limited understanding of academic misconduct in its premises, patterns, and processes. In turn, results support the design and use of systematic preventive mechanisms to address academic misconduct among higher education institutions.
1 Introduction
Student academic misconduct (AM), defined in this study to mean plagiarism and cheating, vexes higher education institutions (HEIs) in the United States and internationally. Substantial empirical research spanning several decades has estimated that at least half of students report having engaged in AM behaviors during their time in university [7, 29, 35, 39, 45, 65]. In the COVID pandemic learning environment, as learning transitioned to online platforms and as students and faculty bemoaned corresponding dips in instructional access and quality, both scholarship and public commentary pointed to increased engagement in AM behaviors [16, 28, 32, 54]. The high incidence of AM presents an existential threat to the fundamental mission of HEIs to verify coursework mastery because it distorts inferences on the learning process. Additionally, despite high rates of AM across campuses, public scrutiny and condemnation of AM cases convey a persistent public expectation that HEIs teach and graduate students who are bona fide competent through faculty members who are bona fide competent as well [17, 20, 27].
To address academic misconduct, substantial empirical research has considered the nature of AM in HEIs by identifying its antecedents and correlates. Empirical research has investigated myriad student-level and institution-level factors which predict AM engagement. At the student level, extrinsic motivation, academic stress, psychopathy, and past underachievement have been linked with greater rates of AM [14, 45, 47, 49]. At the institution level, faculty and administrators often address AM cases with idiosyncratic responses that default to lenient consequences or outright ignoring [4, 11, 26, 31, 42, 45]. Such responses have been faulted for facilitating campus cultures of indifference towards AM among students and so heightening rates of AM engagement. Notably, student knowledge of AM behaviors/consequences has been extensively scrutinized as one major explanation for AM engagement [5, 45]. Specifically, students report a lack of awareness and understanding of the basic terms and assumptions (e.g., what is plagiarism and why does it matter?), situational contexts (e.g., when and where can plagiarism occur?), and HEI policies (e.g., how do programs address plagiarism?) which imbue AM [3, 8, 9, 10, 12, 34, 36, 41, 53].
Ample scholarship has attributed university students’ AM unawareness to a throughline of haphazard secondary school experiences that address what AM means and why it matters—if ever at all [15, 24, 45, 63], Jensen et al. [69], Waltzer et al. [70]. Notably, Johansen et al. [33] surveyed 1654 upper secondary students across several European countries on their knowledge of AM (specifically plagiarism, collaboration, and data use), their certainty in navigating AM situations, and the extent to which they had received secondary school instruction on AM. They found that the great majority of secondary students expressed that they knew what AM entails and how to act accordingly, with little doubt in their knowledge or capacity to do so. Yet they also found the great majority of secondary students reported no formal systematic secondary school instruction on AM; instead many students construed their AM knowledge and behaviors from informal ad hoc interactions like discussions with family members.
This lack of awareness and understanding is a problem because it corresponds to a knowledge-to-practice gap where we expect students to act with academic integrity without accurate vocabulary, procedures, and conceptual models for the umbrella of behaviors it comprises and the consequences that follow [21, 46]. To expect so parallels asking a person to act with punctuality without reference to vocabulary like minutes, procedures like deadlines, or concepts like priority. Without accurate knowledge to guide deliberate behavior, how then can HEIs expect academic integrity in the short term when students opt to engage in AM? Further yet, how can we presume long-term shifts in HEI climates marked by academic integrity when its members have no common knowledge to start and sustain it?
2 Empirical gap
Despite sizable empirical investigation on students’ lack of awareness and understanding of AM, studies have often used recognition tasks to estimate such awareness and understanding, where students are surveyed to report the extent they agree or disagree that certain items count as academic misconduct and just how egregious such items may be (e.g., discussing exam questions with a classmate) [3, 60]. The problem with recognition tasks is that they presume respondents have the very same knowledge in vocabulary, procedures, and conceptual models to perceive items as survey researchers would. Essentially, recognition tasks treat the extent of agreement with researchers’ AM knowledge as a direct estimate of students’ AM knowledge as expressed in their own working words and models. Consequently, a lack of student AM knowledge may speak to a lack of agreement with survey researchers rather than a lack of working terms and models. Such claims are problematic because they can skew intervention efforts to inadvertently focus on bolstering surface-level agreement rather than reinforcing the basic operating terms, procedures, and values that imbue AM.
Addressing the limitation of recognition tasks, Locquiao & Ives [41] conducted a study that prompted 356 first-year university students with a production task to write open-ended responses on what they knew about citations/references and cheating. The rationale with open-ended writing was that responses would better capture more complete and more complex knowledge to AM as students had the opportunity to identify pertinent vocabulary, elaborate on points, and present example/scenarios around AM. Among other findings, the study reported that the great majority of first-year university students held basic superficial knowledge of AM which often defaulted to assignment procedures (e.g., citation styles) or consequences from being flagged (e.g., point deductions). Few students described what AM looked like and where/when it occurs. In addition, a sizeable number of students responded with inaccurate or irrelevant points to AM (e.g., references correspond to resume contacts). Such findings affirmed and extended prior empirical research by pointing to limited AM knowledge in not just recognizing but articulating the operating vocabulary, procedures, and conceptual models which imbue AM.
Yet two major gaps temper inferences from Locquiao and Ives’ [41] study. Their study comprises one of only a handful of studies that have examined AM through a production task instead of a recognition task [3, 60, 61, 66]. Empirical inquiry compels researchers to corroborate claims. This drive to verify is even more urgent given the reproducibility crisis in social research, where growing scholarship and public commentary have decried a “crisis of confidence” in not attaining the same results despite conducting the same study designs—results whose repeated citations over time have come to ground foundational concepts/models (Anvari & Lakens [2], [23], Open Science [55, 67]). Furthermore, their study generated estimates from university students aggregated into a single sample, despite the possibility that estimates could have varied when disaggregated into institutional units like specific college membership. The lack of corroboration and refinement undermines not just working knowledge of what exists, but also undercuts stakeholder interventions and programming that have been designed and funded to match the actual characteristics of university students.
3 Purpose of study and research questions
The present study seeks to add to the research literature by addressing the above gaps in AM research as a replication of Locquiao and Ives’ [41] study design, which prompted first-year undergraduate students to write what they knew about citations/references and cheating. The present study asked two research questions—with attendant hypotheses—that together consider the quality of AM knowledge among first-year undergraduate students overall and across specific colleges:
Question 1: What quality of citations/reference knowledge do first-year university students have overall? And to what extent does such knowledge associate with specific colleges?
-
Null hypothesis 1a. There exists a random distribution in citations/references knowledge among first-year university students overall.
-
Alternative hypothesis 1a. There exists a non-random distribution in citations/references knowledge among first-year university students overall.
-
Null hypothesis 1b. There exists no association between citations/references knowledge across specific colleges.
-
Alternative hypothesis 1b. There exists an association between citations/references knowledge across specific colleges.
Question 2: What quality of cheating knowledge do first-year university students have overall? And to what extent does such knowledge associate with specific colleges?
-
Null hypothesis 2a. There exists a random distribution in cheating knowledge among first-year university students overall.
-
Alternative hypothesis 2a. There exists a non-random distribution in cheating knowledge among first-year university students overall.
-
Null hypothesis 2b. There exists no association between cheating knowledge across specific colleges.
-
Alternative hypothesis 2b. There exists an association between cheating knowledge across specific colleges.
4 Methods
The current study served as a replication of Locquiao and Ives’ [41] prior study. Replication in this instance refers to using the same methods in collection, screening, and analysis to the maximum extent possible with a new dataset to determine the extent that the same results occur from a prior study [50, 56]. Contextual parameters to the prior study sufficiently shifted to warrant replication. Since Locquiao and Ives’ [41] study, several first-year student cohorts over several academic years have proceeded through the partner university’s mandated AM orientation programming, and in turn, the partner university collected a more comprehensive dataset on first-year student cohorts from multiple colleges rather than a couple of colleges. The current study sought to again use the same methods to determine the extent that the same results occur given the changed contextual parameters of more established programming and a new, more comprehensive dataset.
As a replication, the current study again used a descriptive-correlational research design to inform data collection, analysis, and interpretation of results [57]. This design best addressed research questions which again asked for frequencies and distributions in AM knowledge reported by first-year university students. The present study was descriptive rather than experimental because it measured target variables without researcher manipulation. And the study was correlational because it sought to again determine the extent that such frequencies and distributions in types of AM knowledge associated with specific categories like college membership.
Authors presented the study protocol and informed consent procedure to the Institutional Review Board (IRB) of University of Nevada-Reno for exempt review given that the study involved typical review of instruction in the classroom setting. Informed consent for this study was established as a prerequisite for enrollment with the partner university. Participants were informed that they must complete mandatory orientation experiences including AM programming before they start university coursework. Furthermore, participants were informed that programming data would be de-identified, prepared, and used for institutional reports and scholarship. The study protocol and informed consent procedure were given IRB ethical approval by the partner university as exempt research.
5 Sample characteristics and rationale
The study examined quality of academic misconduct knowledge among first-year undergraduate students enrolled in a research-intensive doctoral-granting Western US public university. The university operates with broad administrative units called colleges which offer specific degree programs and related coursework. Participant students were all enrolled as first-year undergraduate students during the 2021–2022 academic year. Students were classified under one of eight constituent colleges (Agriculture Biotechnology and Natural Resources, Business, Community Health Sciences, Education, Engineering, Journalism, Liberal Arts, and Science) or classified as Undeclared depending on their degree program of interest.
Typical demographic variables (e.g., race/ethnicity) were not disclosed with the shared secondary dataset because the online-asynchronous programming did not ask for students’ demographic variables. Good-faith collaboration with the partner university stipulated that demographic variables not be recorded to preserve student anonymity in addressing a sensitive topic with candid responses. Consequently, sample demographic characteristics must be inferred to comprise mostly young adults ranging from 18 to 22 years old and to reflect the social characteristics of the university’s constituent region. The shared secondary dataset comprised 2679 student cases. Researchers deleted 42 cases that did not meet inclusion criteria in the codebook described below. The final analytic sample then comprised 2631 overall student cases with n1 = 194 (Agriculture Biotechnology and Natural Resources), n2 = 400 (Business), n3 = 370 (Community Health Sciences), n4 = 88 (Education), n5 = 488 (Engineering), n6 = 42 (Journalism), n7 = 284 (Liberal Arts), n8 = 533 (Science), and n9 = 232 (Undeclared) student cases for each classification. Post hoc power analysis using G*Power reported that the sample size yielded power (1 − β) = 0.92, which exceeded social research conventions to detect small associations, Cramer’s V = 0.10, set to α = 0.05 [19, 22, 40].
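The post hoc estimate can be approximated without G*Power by using the noncentral chi-square distribution, whose noncentrality parameter equals n × w². This is a sketch under stated assumptions, not a reproduction of the authors' G*Power settings: it assumes the effect size w = 0.10 was entered directly (matching the reported Cramer's V) and that the largest two-way design (3 knowledge categories × 9 college groups, df = 16) anchored the calculation.

```python
# Approximate a post hoc power calculation for a chi-square test.
# Assumptions (hypothetical, not from the study's G*Power output):
# effect size w = 0.10 entered directly; df = 16 from a 3 x 9 design.
from scipy.stats import chi2, ncx2

n = 2631        # analytic sample size
w = 0.10        # small effect size (Cohen's w)
alpha = 0.05
df = 16         # (3 - 1) * (9 - 1)

crit = chi2.ppf(1 - alpha, df)      # critical value under the null
lam = n * w**2                      # noncentrality parameter
power = ncx2.sf(crit, df, lam)      # P(reject H0 | true effect of size w)
print(round(power, 2))
```

Under these assumptions, the computed power lands close to the reported 0.92, which supports the study's claim that the sample was adequately powered for small associations.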
6 Data collection
As with the Locquiao and Ives [41] study, the HEI context which imbued this study was that first-year undergraduate students were mandated to participate in online-asynchronous programming during the 2021–2022 academic year. The programming reviewed major topics to AM in a sequence of modules (e.g., behavioral definitions to AM, examples of AM in and beyond the university setting, factors that predict AM, and consequences to AM in and beyond the university setting). Data collection was conducted through an online learning management system (i.e., WebCampus) which produced spreadsheets of student responses to permit coding and statistical analysis. The partner university compiled, de-identified, and shared the secondary dataset with the Authors after the 2021–2022 academic year ended. As mentioned above, good-faith collaboration with the partner university meant the shared secondary dataset did not record student demographic data besides specific college membership.
7 Target variables
7.1 Student knowledge of academic misconduct
Two variables were selected as the focus to this study. The first variable was student knowledge of AM in terms of citations/references and cheating. Students answered open-ended prompts as they accessed certain modules. For this study, student responses to two prompts were selected for analysis because they were given before review of module content and were inferred to represent pretest snapshots of student AM knowledge on citations/references and cheating. The two prompts asked (a) “What have you learned about citations and references before starting college?” and (b) “What have you learned about cheating on exams or cheating on assignments before starting college?”.
As reported in the codebook (see Appendix), student knowledge on citations/references was defined to mean valid responses that either (a) clearly referenced the salient idea of accurate attribution of others’ work or (b) clearly described a relevant situation or procedure grounded in accurate attribution of others’ work. Student knowledge on test/assignment cheating was defined to mean valid responses that either (a) clearly referenced the salient idea of unauthorized or undisclosed means that afford student advantage in assignments and tests or (b) clearly described a relevant situation or procedure grounded in affording student advantage in assignments and tests. The rationale for the above definitions derived from AM research literature which has generally defined plagiarism to entail many different forms of inaccurate attribution and cheating to entail many different forms of unauthorized advantage in coursework [6, 45].
To examine quality of student knowledge over AM, student responses on citations/references and cheating were recorded as either [Beginner Knowledge] or [Advanced Knowledge] in their coverage and complexity. The rationale for screening for beginner and advanced knowledge derived from educational psychology research which has affirmed that more rather than less elaboration (unpacking claims with additional claims) conveys deep structural knowledge over superficial knowledge [1, 51]. As reported in the codebook, Authors assumed a charitable approach in coding where student responses were marked as [Beginner Knowledge] if they at minimum referenced either a salient idea OR a relevant situation or procedure grounded in citations/references and cheating. In turn, student responses were marked as [Advanced Knowledge] if they presented both a salient idea AND a relevant situation or procedure grounded in citations/references and cheating (no matter how terse the elaboration). Responses that presented neither a salient idea nor a relevant situation or procedure (e.g., inaccurate or irrelevant statements) were marked as [N/A].
7.2 College membership
To determine associations between student responses across specific colleges, the target grouping variable was students’ official college membership as they registered with the mandated AM programming. College membership was recorded as a categorical variable which corresponded to eight constituent colleges (e.g., 2 = Business). College membership also included a category which marked students who did not yet have a declared major and in turn were not covered by any of the constituent colleges. The partner university did not record dual-/triple-majors or minors across multiple constituent colleges, so each student case was marked to represent just one primary constituent college. The rationale for considering college membership derives from prior research that found students of certain majors (e.g., Business) tend to report more instances of academic misconduct than students of other majors [30, 35, 44, 58, 59]. Furthermore, examining college membership presents the opportunity to consider programming/initiatives around academic misconduct targeted towards cohorts at the group level who have sorted themselves into a discipline and may share an underlying habitus—ways of perceiving, thinking, doing, and valuing (Bourdieu [68]).
8 Data analysis
8.1 Coding process and reliability check
Student responses to open-ended prompts were selected as the unit of analysis. Researchers used Quantitative Content Analysis (Quant-CA) as described by Neuendorf [52] to transform student responses into categorical values to permit statistical analysis. Quant-CA interprets written qualitative data as message units mapped to discrete conceptual patterns, which are in turn represented by numerical codes. In doing so, researchers have categorical numbered data to engage in descriptive statistical analyses like determining the extent that specific conceptual patterns occur in a set of cases or testing the extent that specific conceptual patterns relate with each other.
Quant-CA contrasts from other forms of thematic analysis because it considers qualitative data under the paradigm of quantitative assumptions and methods [52]. Notably, Quant-CA prompts researchers to create a codebook of a priori codes to identify message units as discrete conceptual patterns before formal review rather than during or after review. The rationale behind this process is to test inferences rather than affirm inferences after the fact. Researchers prepared a codebook (see Appendix) that described, explained, and justified criteria for assignment of message units to discrete conceptual patterns as represented by categorical values. Researchers coded each student response with a single categorical value that best matched a priori criteria.
The Quant-CA coding process involved two major steps: drafting codebook criteria and checking codebook criteria. Both authors had full access to the entire cohort of student responses. The first author reviewed research literature on superficial versus deep conceptual knowledge to generate initial codes with corresponding criteria. The author then reviewed and revised codebook criteria over multiple iterations to bolster conceptual validity and reliability in criterion language. The author presented the penultimate draft of the codebook to a student researcher in preparation for interrater reliability check. For the interrater reliability check, the author and the student researcher were assigned to code the same randomized sample which comprised 202 cases (7.5%) of 2679 total cases in the first round of data collection. This percentage was deemed an appropriate sample that accommodates Quant-CA convention and feasibility to conduct reliability checks over a large qualitative dataset [52].
The first author and the student researcher reviewed each response item for each case within the randomized sample against the full set of criteria. The author then compiled the coded sample to conduct interrater reliability checks for each response item. Percent absolute agreement among raters reported 81% for [Citations/References Knowledge] and 83% for [Cheating Knowledge], which conveyed moderate-substantial agreement. Furthermore, for response items with sufficient variance (where raters assigned a mix of different codes across cases), Cohen’s κ calculations reported 0.68 for [Citations/References Knowledge] and 0.69 for [Cheating Knowledge], which conveyed substantial agreement after accounting for random error [38]. In turn, criterion language in the penultimate draft was adopted as the final version of the full codebook. The author then proceeded to analyze the total sample of 2679 cases with the full codebook.
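Both reliability indices can be computed directly from paired code assignments. The sketch below uses short hypothetical rating vectors (the actual 202-case sample is not reproduced here); Cohen's κ corrects raw agreement for the agreement expected by chance from each rater's marginal code frequencies.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of cases where both raters assigned the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal code frequencies
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for six responses: 0 = N/A, 1 = Beginner, 2 = Advanced
rater_a = [1, 1, 0, 2, 1, 0]
rater_b = [1, 1, 0, 2, 0, 0]
print(percent_agreement(rater_a, rater_b))        # 5 of 6 cases match
print(round(cohens_kappa(rater_a, rater_b), 2))
```

Note that κ is always at or below raw agreement because it discounts matches that marginal code frequencies alone would produce, which is why the study reports both indices.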
8.2 Statistical analysis
Both research questions asked for frequency counts and the extent such frequency counts correspond to non-random distributions in quality of AM knowledge. Therefore, after a priori coding, authors conducted descriptive frequency counts followed by the inferential Chi Square Goodness of Fit Test (One-Way) and Chi Square Test of Independence (Two-Way), which determined the extent that observed frequency counts diverge from expected frequency counts (Cohen, 1988). In preparation for testing, each case was recorded with only one categorical value (as described in the above protocol) to meet the assumption of independence in observations in Chi Square testing. Essentially, responses were not counted across multiple conditions (e.g., recorded to embody both preconventional and postconventional moral reasoning). Furthermore, authors reviewed descriptive frequency counts for the extent each category yielded sufficient responses to meet the assumption of adequate expected cell counts in Chi Square testing [37]. Categories with markedly insufficient expected cell counts were excluded from Chi-Square testing. Insufficient cell counts emerged in sub-categories of college membership (specifically, Education and Journalism) under cheating knowledge, which prompted exclusion of both sub-categories from two-way testing. This in turn narrowed the scope of results for Question 2b.
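The two tests and the expected-cell-count check map directly onto standard statistical tooling. A minimal sketch using scipy with small hypothetical counts (not the study data):

```python
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# One-Way Goodness of Fit: do observed code counts diverge from a uniform
# (random) distribution? The counts below are hypothetical placeholders.
observed = np.array([120, 60, 20])
gof_stat, gof_p = chisquare(observed)   # expected defaults to uniform

# Two-Way Test of Independence: knowledge category x college membership.
table = np.array([[40, 30, 50],
                  [25, 35, 20]])
ind_stat, ind_p, ind_df, expected = chi2_contingency(table)

# Assumption check: expected cell counts should be adequate (commonly >= 5);
# categories falling short would be excluded before testing, as in the
# protocol above.
adequate = bool((expected >= 5).all())
print(round(gof_stat, 2), ind_df, adequate)
```

The degrees of freedom follow the usual rules: k − 1 categories for the one-way test and (r − 1)(c − 1) for the two-way table.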
9 Results
Tables 1 and 2 present One-Way Chi Square Test results with exemplary student responses that captured the overall nature of responses around each discrete code. Tables 3 and 4 present Two-Way Chi Square test results. Given the markedly insufficient cell counts described above, Two-Way estimates for cheating knowledge excluded Education and Journalism cases. Several patterns emerge from the results. First, most responses reported [Beginner] knowledge for both citations/references (n = 1299, 49.4%) and cheating (n = 1670, 63.5%); but a non-trivial number of responses conveyed [N/A] statements for both citations/references (n = 707, 26.9%) and cheating (n = 866, 32.9%) as well. As highlighted in the exemplary student responses, [N/A] construal of citations/references frequently revolved around penalties or testimonials, tenuous tangents, or lack of formal instruction. [Beginner] construal of citations/references often discussed just the procedural mechanics of citations/references in their shape (e.g., MLA, APA, Chicago) and place (e.g., after a sentence, on the last page, around quotations) without the underlying principle of giving credit in the context of academic projects. In turn, [N/A] construal of cheating often revolved around matter-of-fact exhortations to not cheat, without further elaboration on what cheating means or entails. [Beginner] construal of cheating overwhelmingly discussed the risk and repercussions to one’s grade, standing, reputation, etc. with being caught cheating. And a conspicuous number of responses demonstrated [Advanced] knowledge (n = 625, 23.8%) in discussing the premises, patterns, and processes of citations/references; but in contrast, markedly fewer responses elaborated upon cheating (n = 95, 3.6%).
One-Way Chi Square estimates for both citations/references knowledge and cheating knowledge corroborated the above uneven counts as statistically significant non-random distributions at the overall university level (χ2(2) = 308.42 and χ2(2) = 1414.47).
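The reported One-Way statistics can be reproduced from the frequency counts above by testing the three knowledge categories against a uniform expected distribution (2631 / 3 = 877 responses per cell):

```python
from scipy.stats import chisquare

# Observed counts from the results: [Beginner, Advanced, N/A]
citations = [1299, 625, 707]   # sums to 2631
cheating = [1670, 95, 866]     # sums to 2631

stat_cit, p_cit = chisquare(citations)   # expected: 877 per cell
stat_che, p_che = chisquare(cheating)
print(round(stat_cit, 2), round(stat_che, 2))   # 308.42 1414.47
```

Both statistics match the values reported in the text, confirming that the tests were run against a uniform (random) expected distribution.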
Second, Two-Way Chi Square results reported that responses for each college paralleled the aggregate university level distribution where most responses reflected [Beginner] knowledge; followed by [N/A] knowledge; and then rounded out with [Advanced] knowledge. But the proportion of responses greatly varied across different colleges. In contrast with n = 27 (21.1%) responses within Community Health Sciences, n = 127 (31.8%) responses within Business and n = 77 (33.2%) cases within Undeclared reported [N/A] citations/references knowledge. In contrast with n = 52 (26.8%) responses within Agriculture Biotechnology and Natural Resources, n = 155 (38.8%) responses within Business and n = 171 (35.0%) responses within Engineering reported [N/A] cheating knowledge. Two-Way Chi Square estimates corroborated such uneven proportions as statistically significant non-random distributions for citations/references knowledge and cheating knowledge at the level of specific college membership (χ2(16) = 28.84 and χ2(12) = 23.65). Specific college membership yielded a small but appreciable association (Cramer’s V = 0.07) with the quality of citations/reference knowledge and cheating knowledge among first-year undergraduate students.
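The reported effect size follows from the two-way statistics via Cramér's V = sqrt(χ² / (n × min(r − 1, c − 1))). A sketch checking both reported values (for cheating, n excludes the 88 Education and 42 Journalism cases):

```python
from math import sqrt

def cramers_v(chi2_stat, n, n_rows, n_cols):
    """Effect size for a two-way chi-square test."""
    return sqrt(chi2_stat / (n * min(n_rows - 1, n_cols - 1)))

# Citations/references: 3 knowledge categories x 9 college groups, n = 2631
v_citations = cramers_v(28.84, 2631, 3, 9)
# Cheating: 3 categories x 7 groups (Education and Journalism excluded)
v_cheating = cramers_v(23.65, 2631 - 88 - 42, 3, 7)
print(round(v_citations, 2), round(v_cheating, 2))   # 0.07 0.07
```

Both computed values round to the reported V = 0.07, a small association by conventional benchmarks.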
10 Discussion
Substantial empirical research has considered the nature of AM in higher education institutions by identifying its antecedents and correlates. But such factors warrant further empirical corroboration. As a replication, this study contributes to the research literature by verifying the quality of first-year student knowledge over AM. Two major inferences emerge from the results. With respect to the 1a and 2a hypotheses, frequency counts and inferential estimates suggest that at the university level, a substantial and skewed distribution of first-year students hold inaccurate/irrelevant or fragmented knowledge about the point, presentation, and procedures of citations/references and cheating. With respect to the 1b and 2b hypotheses, inferential estimates further convey that such skewed distributions swell or shrink at the level of specific college membership. The results corroborate prior empirical research that has pointed to undeveloped AM knowledge among university students [3, 9, 10, 34, 36, 53]. And the results implicate how a throughline of haphazard secondary experiences on AM carries over into the start of university studies despite raised standards and consequences around AM ([15, 24, 33, 45, 63], Jensen et al. [69], Waltzer et al. [70]).
More specifically, the results affirm Locquiao and Ives’ [41] portrait of the typical first-year undergraduate student as they navigate coursework. Upon learning that a course expects citations/references, the typical first-year student would dutifully apply any one citation style and mechanics like quotations, parentheses, footnotes, etc. But over rote execution, that typical student would not understand the consequences nor underlying point of citations/references as in part recognition to the intellectual record. And depending on their chosen major, that typical student may have even less working knowledge on citations/references. Similarly, upon learning that a course prohibits cheating, the typical first-year student would understand how cheating bodes a cavalcade of consequences. But that typical student would not be able to name just how cheating manifests nor the underlying point of cheating as in part undue advantage. And depending on their chosen major, that typical student may have even less working knowledge on cheating.
The immediate implication for practice is that HEIs ought to presume that first-year undergraduate students possess fragmented (at best) or non-existent (at worst) knowledge of the point, presentation, and procedures of AM as they begin university coursework. HEIs may reasonably expect limited knowledge as an operating assumption given the throughline of haphazard secondary experiences on AM. Furthermore, HEIs ought to anticipate the likelihood that first-year students enrolled in specific colleges may hold disproportionately greater instances of fragmented or non-existent AM knowledge. In line with longstanding exhortations by McCabe [43], Bretag et al. [8, 48], Saddiqui [62], Eaton and Hughes [18], and Cullen [13] for coherent and comprehensive HEI initiatives to address AM, study results justify the design and implementation of holistic multilevel program models as outlined by Stephens [64] marked with universal prevention and tiered interventions. Universal prevention means all university students participate in common AM programming (e.g., instruction on point, presentation, and procedures) at the start of their studies, and tiered intervention means that AM programming delivers tailored supports (e.g., academic mentorship, routine plagiarism checks, in-person rather than online exams) to specific university students for whom universal programming does not appear to suffice. A universal mechanism would not only address the staggering proportion of first-year students with limited knowledge at the onset of their university studies but also correct that throughline of haphazard secondary school experiences on AM. Universal AM programming that presents more accurate and more complete AM knowledge would circumvent most potential instances of AM. A tiered mechanism then would address the smaller proportion of university students who warrant correction and reinforcement to act upon AM knowledge.
11 Conclusion and limitations
This study sought to replicate one of the handful of studies that examined university students’ AM knowledge through a production task rather than a recognition task. It presents empirical evidence that corroborates the claim that first-year undergraduate students hold piecemeal knowledge of the point, presentation, and procedures of AM. But several limitations temper its results and inferences. First, prompts were delivered as open-ended responses within mandated programming, which means there may have been little incentive for student participants to elaborate upon their answers beyond perfunctory submission in order to proceed with the rest of the mandated programming. The risk exists, then, that the open-ended responses conflated expediency with bona fide knowledge.
Second, as with Locquiao and Ives’ [41] approach, results and inferences are moderated by the study’s transformation of qualitative data into numerical data. This process risked the loss of detail, nuance, and specificity behind student responses, which in turn presents threats to construct validity and interrater reliability. While the process of generating, vetting, and analyzing students’ responses under a codebook mitigates such threats, the possibility remains that data transformation excluded other salient characteristics of AM knowledge from review. Third, the coding protocol adopted a charitable perspective in assigning each discrete case to one of three possible categories. The risk is that the coding protocol captured a coarser quality of student knowledge of AM, which may have regarded gray area situations as described by Goddiksen et al. [25], like “asking for help from a parent/guardian over homework,” as cheating because they fit the criterion of unauthorized or undisclosed student advantage. While the cheating criterion in the coding process worked as intended to capture instances that were not explicitly prohibited by an instructor (e.g., writing crib notes in the side pages of a book-only exam), the risk remained that gray area situations were included as well. In any case, this study does not serve as the final word on AM knowledge among first-year undergraduate students. Instead, this study adds empirical evidence on the shape and scope of AM knowledge; and in doing so, it presents HEIs with a more robust outline of the gaps to address.
Data availability
Data not available due to confidentiality of student information.
References
Anderson LW, Krathwohl DR, Bloom BS. A taxonomy for learning, teaching, and assessing: a revision of Bloom’s taxonomy of educational objectives. Harlow: Longman; 2001.
Anvari F, Lakens D. The replicability crisis and public trust in psychological science. Compr Results Soc Psychol. 2019;3(3):266–86. https://doi.org/10.1080/23743603.2019.1684822.
Ashworth P, Bannister P, Thorne P. Guilty in whose eyes? University students’ perceptions of cheating and plagiarism in academic work and assessment. Stud High Educ. 1997;22(2):187–203. https://doi.org/10.1080/03075079712331381034.
Amigud A, Pell DJ. When academic integrity rules should not apply: a survey of academic staff. Assess Eval High Educ. 2021;46(6):928–42. https://doi.org/10.1080/02602938.2020.1826900.
Brimble M. Why students cheat: an exploration of the motivators of student academic dishonesty in higher education. In: Bretag T, editor. Handbook of academic integrity. Singapore: Springer; 2016.
Bretag T, editor. Handbook of academic integrity. New York: Springer; 2016.
Bretag T, Harper R, Burton M, Ellis C, Newton P, Rozenberg P, van Haeringen K. Contract cheating: a survey of Australian university students. Stud High Educ. 2019;44(11):1837–56. https://doi.org/10.1080/03075079.2018.1462788.
Bretag T, Mahmud S, Wallace M, Walker R, McGowan U, East J, James C. ‘Teach us how to do it properly!’ an Australian academic integrity student survey. Stud High Educ. 2013;39(7):1150–69. https://doi.org/10.1080/03075079.2013.777406.
Burrus RT, McGoldrick K, Schuhmann PW. Self-reports of student cheating: does a definition of cheating matter? J Econ Educ. 2007;38(1):3–16. https://doi.org/10.3200/JECE.38.1.3-17.
Carpenter DD, Harding TS, Finelli CJ. Using research to identify academic dishonesty deterrents among engineering undergraduates. Int J Eng Educ. 2010;26(5):1156–65.
Coren A. Turning a blind eye: faculty who ignore student cheating. J Acad Ethics. 2011;9(4):291–305. https://doi.org/10.1007/s10805-011-9147-y.
Crook S, Cranston J. Punished but not prepared: an exploration of novice writers’ experiences of plagiarism at university. Can Perspect Acad Integr. 2021;4(1):40–69. https://doi.org/10.11575/cpai.v4i1.70974.
Cullen CS. Pivoting from punitive programs to educational experiences: knowledge and advice from research. J Coll Character. 2022;23(1):48–59. https://doi.org/10.1080/2194587X.2021.2017973.
Curtis GJ, Clare J, Vieira E, Selby E, Jonason PK. Predicting contract cheating intentions: dark personality traits, attitudes, norms, and anticipated guilt and shame. Pers Individ Differ. 2022;185:111277. https://doi.org/10.1016/j.paid.2021.111277.
Davis SF, Drinan PF, Gallant TB. Cheating in school: what we know and what we can do. Hoboken: Wiley; 2009.
Dey S. Reports of cheating at colleges soar during the pandemic. National Public Radio. 2021. https://www.npr.org/2021/08/27/1031255390/reports-of-cheating-at-colleges-soar-during-the-pandemic
Downes M. University scandal, reputation, and governance. Int J Educ Integr. 2017;13(1):1–20. https://doi.org/10.1007/s40979-017-0019-0.
Eaton SE, Christensen Hughes J. Academic misconduct in higher education: beyond student cheating. In: Eaton SE, Hughes JC, editors. Academic integrity in Canada. Cham: Springer International Publishing AG; 2022.
Ellis PD. The essential guide to effect sizes: statistical power, meta-analysis, and the interpretation of research results. Cambridge: Cambridge University Press; 2010.
Engler JN, Landau JD, Epstein M. Keeping up with the Joneses: students’ perceptions of academically dishonest behavior. Teach Psychol. 2008;35(2):99–102. https://doi.org/10.1080/00986280801978418.
Farley-Ripple E, May H, Karpyn A, Tilley K, McDonough K. Rethinking connections between research and practice in education: a conceptual framework. Educ Res. 2018;47(4):235–45. https://doi.org/10.3102/0013189X18761042.
Faul F, Erdfelder E, Lang A, Buchner A. GPower 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175–91. https://doi.org/10.3758/BF03193146.
Feilden T. Most scientists ‘can’t replicate studies by their peers’. British Broadcasting Corporation. 2017. https://www.bbc.com/news/science-environment-39054778
Galloway MK. Cheating in advantaged high schools: prevalence, justifications, and possibilities for change. Ethics Behav. 2012;22(5):378–99. https://doi.org/10.1080/10508422.2012.679143.
Goddiksen MP, Willum Johansen M, Armond AC, Centa M, Clavien C, Gefenas E, Lund TB. Grey zones and good practice: a European survey of academic integrity among undergraduate students. Ethics Behav. 2023;34(3):199–217. https://doi.org/10.1080/10508422.2023.2187804.
Gottardello D, Karabag SF. Ideal and actual roles of university professors in academic integrity management: a comparative study. Stud High Educ. 2022;47(3):526–44. https://doi.org/10.1080/03075079.2020.1767051.
Hartocollis A. The next battle in higher ed may strike at its soul: scholarship. New York: The New York Times; 2024.
Ives B. University students experience the COVID-19 induced shift to remote instruction. Int J Educ Technol High Educ. 2021. https://doi.org/10.1186/s41239-021-00296-5.
Ives B, Alama M, Mosora LC, Mosora M, Grosu-Radulescu L, Clinciu AI, Dutu A. Patterns and predictors of academic dishonesty in Romanian university students. High Educ. 2017;74(5):815–31. https://doi.org/10.1007/s10734-016-0079-8.
Ives B, Giukin L. Patterns and predictors of academic dishonesty in Moldovan university students. J Acad Ethics. 2020;18(1):71–88. https://doi.org/10.1007/s10805-019-09347-z.
Ives B, Nehrkorn A. A research review: post-secondary interventions to improve academic integrity. In: Velliaris DM, editor. Prevention and detection of academic misconduct in higher education. Pennsylvania: IGI Global; 2019. p. 39–62.
Jenkins BD, Golding JM, Le Grand AM, Levi MM, Pals AM. When opportunity knocks: college students’ cheating amid the COVID-19 pandemic. Teach Psychol. 2022;50(4):407–19. https://doi.org/10.1177/00986283211059067.
Johansen MW, Goddiksen MP, Centa M, Clavien C, Gefenas E, Globokar R, Lund TB. Lack of ethics or lack of knowledge? European upper secondary students’ doubts and misconceptions about integrity issues. Int J Educ Integr. 2022;18(1):20–5. https://doi.org/10.1007/s40979-022-00113-0.
Jordan AE. College student cheating: the role of motivation, perceived norms, attitudes, and knowledge of institutional policy. Ethics Behav. 2001;11(3):233–47. https://doi.org/10.1207/S15327019EB1103_3.
Jurdi R, Hage HS, Chow HPH. Academic dishonesty in the Canadian classroom: behaviors of a sample of university students. Can J High Educ. 2011. https://doi.org/10.47678/cjhe.v41i3.2488.
Keener TA, Galvez Peralta M, Smith M, Swager L, Ingles J, Wen S, Barbier M. Student and faculty perceptions: appropriate consequences of lapses in academic integrity in health sciences education. BMC Med Educ. 2019;19(1):209–209. https://doi.org/10.1186/s12909-019-1645-4.
Kroonenberg PM, Verbeek A. The tale of Cochran’s rule: my contingency table has so many expected values smaller than 5. What am I to do? Am Stat. 2018;72(2):175–83. https://doi.org/10.1080/00031305.2017.1286260.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159. https://doi.org/10.2307/2529310.
Lin C, Wen L. Academic dishonesty in higher education: a nationwide study in Taiwan. High Educ. 2007;54(1):85–97. https://doi.org/10.1007/s10734-006-9047-z.
Lipsey MW, Puzio K, Yun C, Hebert MA, Steinka-Fry K, Cole MW, Roberts M, Anthony KS, Busick MD. Translating the statistical representation of the effects of education interventions into more readily interpretable forms. National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education; 2012.
Locquiao J, Ives B. First-year university students’ knowledge of academic misconduct and the association between goals for attending university and receptiveness to intervention. Int J Educ Integr. 2020;16(1):1–19. https://doi.org/10.1007/s40979-020-00054-6.
McCabe DL. Faculty responses to academic dishonesty: the influence of student honor codes. Res High Educ. 1993;34(5):647–58. https://doi.org/10.1007/BF00991924.
McCabe DL. It takes a village: academic dishonesty & educational opportunity. Lib Educ. 2005;91(3):26–31.
McCabe DL, Treviño LK. Cheating among business students: a challenge for business leaders and educators. J Manag Educ. 1995;19(2):205–18.
McCabe DL, Treviño LK, Butterfield KD. Cheating in college: why students do it and what educators can do about it. Baltimore: Johns Hopkins University Press; 2012.
McIntyre D. Bridging the gap between research and practice. Camb J Educ. 2005;35:357–82.
Miller AD, Murdock TB, Anderman EM, Poindexter AL. 2 - Who are all these cheaters? Characteristics of academically dishonest students. In: Anderman EM, Murdock TB, editors. Psychology of academic cheating. Cambridge: Elsevier Academic Press; 2007. p. 9–32.
Morris EJ, Carroll J. Developing a sustainable holistic institutional approach: dealing with realities “on the ground” when implementing an academic integrity policy. In: Bretag T, editor. Handbook of academic integrity. Singapore: Springer; 2015.
Murdock TB, Anderman EM. Motivational perspectives on student cheating: towards an integrated model of academic dishonesty. Educ Psychol. 2006;41(3):129–45. https://doi.org/10.1207/s15326985ep4103_1.
National Academies of Sciences, Engineering, and Medicine. Reproducibility and replicability in science. Washington, DC: The National Academies Press; 2019.
National Research Council. How people learn: brain, mind, experience, and school. Washington, DC: The National Academies Press; 2000.
Neuendorf KA. The content analysis guidebook. Thousand Oaks: Sage Publication; 2002.
Newton P. Academic integrity: a quantitative study of confidence and understanding in students at the start of their higher education. Assess Eval High Educ. 2016;41(3):482–97. https://doi.org/10.1080/02602938.2015.1024199.
Newton P, Essex K. How common is cheating in online exams and did it increase during the COVID-19 Pandemic? A systematic review. J Acad Ethics. 2023. https://doi.org/10.1007/s10805-023-09485-5.
Rahal RM, Kleinberg B, Crusius J, Tio P. Estimating the reproducibility of psychological science. Science. 2015. https://doi.org/10.1126/science.aac4716.
Patil P, Peng RD, Leek JT. A statistical definition for reproducibility and replicability. bioRxiv. 2016. https://doi.org/10.1101/066803.
Price PC, Jhangiani RS, Chiang I-CA, Leighton C, Cuttler C. Research methods in psychology. 3rd ed. Montreal: PB Pressbooks; 2017.
Quah CH, Stewart N, Wee J. Attitudes of business students’ toward plagiarism. J Acad Ethics. 2012;10:185–99.
Rakovski CC, Levy ES. Academic dishonesty: perceptions of business students. Coll Stud J. 2007;42(2):466–81.
Risquez A, O’Dwyer M, Ledwith A. ‘Thou shalt not plagiarise’: from self-reported views to recognition and avoidance of plagiarism. Assess Eval High Educ. 2011;38(1):34–43. https://doi.org/10.1080/02602938.2011.596926.
Roig M. Can undergraduate students determine whether text has been plagiarized? Psychol Record. 1997;47(1):113–22. https://doi.org/10.1007/BF03395215.
Saddiqui S. Engaging students and faculty: examining and overcoming the barriers. In: Bretag T, editor. Handbook of academic integrity. Singapore: Springer; 2016.
Schab F. Schooling without learning: thirty years of cheating in high school. Adolescence. 1991;26(102):839–48.
Stephens JM. Creating cultures of integrity: a multilevel intervention model for promoting academic honesty. In: Bretag T, editor. Handbook of academic integrity. Singapore: Springer; 2016.
Trost K. Psst, have you ever cheated? A study of academic dishonesty in Sweden. Assess Eval High Educ. 2009;34(4):367–76. https://doi.org/10.1080/02602930801956067.
Waltzer T, Dahl A. Students’ perceptions and evaluations of plagiarism: effects of text and context. J Moral Educ. 2020;50(4):436–51. https://doi.org/10.1080/03057240.2020.1787961.
Yong E. How reliable are psychology studies? The Atlantic. 2015; https://www.theatlantic.com/science/archive/2015/08/psychology-studies-reliability-reproducability-nosek/402466/
Bourdieu P. Outline of a theory of practice. Cambridge: Cambridge University Press; 1977.
Jensen LA, Arnett JJ, Feldman SS, Cauffman E. It’s wrong, but everybody does it: academic dishonesty among high school and college students. Contemp Educ Psychol. 2002;27(2):209–28. https://doi.org/10.1006/ceps.2001.1088.
Waltzer T, DeBernardi FC, Dahl A. Student and teacher views on cheating in high school: perceptions, evaluations, and decisions. J Res Adolesc. 2022;33(1):108–26. https://doi.org/10.1111/jora.12784.
Acknowledgements
The authors wish to thank NevadaFIT at the University of Nevada, Reno, especially Felicia DeWald and their staff, for data access. The authors also thank Abby Gronlund for support with the codebook reliability check, and the anonymous reviewers for sharing insights and feedback to improve the manuscript.
Funding
This study was not supported by any funding.
Author information
Authors and Affiliations
Contributions
All authors contributed to this work. The percentage of contributions is indicated by the authorship order. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The second author was employed by the university that implemented the AM programming mentioned in the study. This study does not serve the financial interest or benefit of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Locquiao, J., Ives, B. Replication to first-year undergraduate students’ knowledge of academic misconduct. Discov Educ 3, 99 (2024). https://doi.org/10.1007/s44217-024-00190-y