Introduction

When we introduce to our students the idea of establishing a “culture of academic integrity,” we typically frame the discussion primarily around the responsibilities of students. In my own syllabi as an instructor, I talk about academic integrity embodying everything from explicit skills like correctly citing sources, to more intangible concepts like coming to class prepared and treating colleagues with respect. Faculty know that they, too—whether through scholarly associations, granting agencies, or their own internal ethical compass—have a commitment to academic integrity. But when we frame academic integrity for students, that component is rarely discussed. It’s common in higher education circles to hear about the notion of creating a culture of academic integrity (Hendershott et al., 2000), but definitions of what such an academic culture would look like vary widely (Macfarlane et al., 2014). While most institutions focus on student responsibilities vis-à-vis academic integrity, it is rarer to see the conversation extend to the responsibilities of faculty, though the power of faculty modelling—such as citing sources in slide decks and taking care with the reuse of content like images—is significant in helping students understand the norms of their discipline regarding academic integrity (Robinson & Glanzer, 2017). Institutional policies and campus training programs focus on students. But modelling a culture of academic integrity begins with what happens in the classroom and across the institution, and as violations of academic integrity go high-tech with the rise of contract cheating, institutions must reckon with the ways their data privacy and educational technology policies and practices fail to model academic integrity. In this chapter, I argue that institutions can insulate against the contract cheating epidemic through renewed attention to ethical pedagogical strategies in the deployment of educational technologies. Given the explosive growth of the contract cheating problem and the enormous profits it generates for unethical actors, it is imperative that post-secondary institutions protect students by all possible means, including examining their own cultures of academic integrity in the digital space. Canadian higher education—and indeed, higher education globally—has not to this point had a sector-wide conversation about ethics in educational technologies, and as a result our institutions often engage in practices and contracts with private, for-profit companies that include data agreements that would never pass an equivalent of a Research Ethics Board review (Stewart, 2020).

This chapter focuses specifically on the failure of academic integrity known as contract cheating. This is cheating that takes the form of students hiring third parties—sometimes across the globe—to complete one or all of their assignments in a class for them. Contract cheating is not a situation of individual students in a course making poor choices: it’s a business. There has always been opportunity for students with financial means to purchase coursework from colleagues, but now the gig economy and the ease of transferring data mean that people—perhaps as far away as Kenya (Lancaster, 2019a)—are earning comparatively small amounts of money to produce intellectual labour which is then sold to desperate students in North America and Europe, particularly in English-speaking countries, for big profit (Rigby, 2014). It’s hard to track by traditional means, whether pedagogical or technological; if every assignment in a course is created by someone else, the instructor has no grounds for comparison, and many of these companies proudly proclaim their high originality scores on Turnitin because, of course, the work is original—it just wasn’t written by the student (Cadloff, 2018; Peterson, 2019). As a result, there is likely even more of it happening than we think (Eaton, 2020). And it makes a lot of money globally (Rigby, 2014; Ellis et al., 2018).

I am interested, though, in how the same practices we see engaged in by institutions and educational technology companies—a devaluing and de-prioritization of original work in the classroom space, predatory relationships between consumers and service providers, and a cavalier approach to handling data privacy—are echoed and magnified by contract cheating firms. I argue that a more responsible relationship to student data, and a less cozy relationship with for-profit educational technologies, are required if our institutions are serious about fostering a culture of academic integrity.

De-Valuing and De-Prioritizing Original Work in the Classroom

At the 2019 Academic Integrity Day held at Thompson Rivers University, student union groups repeatedly raised the issue of reciprocity and respect as critical to conversations about why students cheat. One student union representative specifically asked how students can be expected to feel a sense of responsibility over their own learning in courses increasingly taught using tools like for-profit homework systems or courseware and publisher-provided lecture slides: “If you can buy your lectures and assignments,” he asked the assembled professors and administrators, “why can’t I buy my submissions?” This echoes Julia Christensen Hughes’ observation, drawn from reviewing student explanations of academic dishonesty, that “students cheat when they feel cheated” (Christensen Hughes, 2017, p. 57). Added to this are the privacy and security issues enmeshed in homework systems, which at least in part monetize student data while simultaneously charging students for the opportunity to submit their coursework in classes for which they are already paying tuition (UNESCO IITE, 2020; Senack et al., 2016). Although there are good and compelling reasons why professors opt to use these kinds of tools, including an increasingly precarious professoriate, the pressures of producing research, and the additional stresses imposed by institutional restructuring in the wake of Covid-19, the message we send to students becomes deeply complicated. What does it mean to foster a culture of academic integrity across a university, what does it mean to act with integrity, and does the responsibility fall only on students? If we truly wish to model academic integrity, the answer to the latter question must be no: if the stakes of academic integrity truly matter, our students should see us all embodying it (Morris, 2016).

As universities integrate ever more completely with large, for-profit educational technology companies, they burrow ever deeper into troubling agreements in which student data is monetized now or in perpetuity. In addition to troubling data practices, many of the solutions heralded as tools in the fight for academic integrity, whether plagiarism detection software or e-proctoring platforms, engage in questionable ethical practices like profiling student behaviour (Swauger, 2020). Turnitin, for example, has access to wide swaths of student data in the form of essays and assignments, which it mines in order to compare new submissions against its database and assess whether student work has been copied. Its business model relies on receiving student intellectual property for free—students, of course, are not compensated for providing the content for the database—and has expanded to include a Revision Assistant tool for students that is also built from this massive amount of student data (Stommel & Morris, 2017). Revision Assistant, designed to help students identify errors in their writing prior to submission, is, in essence, a machine-taught tool for improving writing, trained on the vast swaths of student writing Turnitin’s larger database can analyze. Are students fully informed about where their data is going, in this context, and who is profiting from it? Increasingly, we’re seeing student groups advocate for more transparency in the use of Turnitin, and for opt-out policies to be made more explicit. Much of the way Turnitin is used in Canada (far less ubiquitously than elsewhere) is due to Jesse Rosenfeld’s landmark legal action against Turnitin; Rosenfeld argued successfully that students should not be presumed guilty, which is the position from which all plagiarism detection software begins, but also that students should not be compelled to waive their copyright in order to be evaluated (Purdy, 2005). Instructors can ask to have the work their students submit deleted from the Turnitin database, but they have to know to ask. Turnitin has always downplayed the data mining it does, but that mining is the backbone of its ability to offer its service. It’s also what makes the company attractive to investors. In March 2019, Turnitin was acquired for US$1.75 billion, which gives you a sense of what all that uncompensated student intellectual property and mined data is worth (Luke, 2019).

And many of these companies encroach on the central tenets of university life by holding institutions to restrictive non-disparagement clauses that limit the freedom of speech and critique of students, staff, and faculty alike, sometimes even when those individuals were never party to the original agreement. The CEO of TopHat, for example, likes to talk about how much data the company has access to, and how it can drill down into that data far enough to analyze individual student study habits (Zubair, 2017). But the End User License Agreement for TopHat includes provisions stating that students cannot link to TopHat in an article critical of its use, that students are responsible for any data breaches that occur, and that there is no opt-out from the collection of personal data beyond opting out of the service altogether (Rhinelander, 2017). When educational institutions contract with these firms, they believe the gain is greater than the loss; this is the same logic that drives desperate students to the essay mill, and it’s equally misguided in both cases. If our integrity as institutions can be chipped away through predatory agreements that disrupt the very mission of our institutions, can we be surprised when we see students engaging in the equivalent?

Predatory Relationships with For-Profit EdTech

For-profit educational technology companies—in contrast with open or publicly-funded edtech projects—can make money in a limited number of ways: they can sell an individual or an institution one-shot products, they can offer a subscription model that is institution-paid or student-paid, or they can sell the data they collect. Or some combination of the above. Many of the agreements our institutions sign with these companies give explicit rights to use student data for things like “targeted marketing,” and opt-outs are complicated and Byzantine (American Association of University Professors, n.d.). In 2012, for example, Pearson, once primarily a textbook company and now primarily a data company, boasted that it had more access to student data in K-12 than anyone else in the world (Office of EdTech, 2012). In the higher education space, Pearson is the biggest player, with extensive access to student data, including everything from financial aid applications to interim and final grades. The company says it doesn’t sell student data, but it also publicly refused to sign the Student Privacy Pledge. And recently, the inevitable happened: a data breach, exposing data from 13,000 institutions and one million college students. The attack occurred in November of 2018, but Pearson waited until the following March to inform the FBI, and end users were not notified until August. While Pearson asserts that the breach was “limited” to first and last name, date of birth, and email address—enough to do a fair amount of damage!—it impacted data collected as early as 2001 (Olson, 2019). The rollout of the disclosure (and the disappearance of the statement from Pearson’s website) suggests that the top priority in this instance was never student data, but brand management.

When students contract with cheating companies, they are usually required to hand over a great deal of personal data, some of which should be of specific concern to universities. Students give up credit card information, yes, and, distressingly, their social insurance numbers (SIN; the primary government identification number in Canada, similar to the SSN in the US); they also give up their student ID numbers and Learning Management System login information to companies with very limited privacy protections (Sutherland-Smith & Dullaghan, 2019). Certainly, giving up a credit card number and SIN can have far-reaching impacts on a student for years to come; handing over a student ID and login information is how contract cheating firms, once established at a university, experience exponential growth. Once a contract cheating firm has private student information, students become easy to exploit; contract cheating companies have been reported to continue extracting money from students well after they have completed the course for which they contracted the service in the first place, and when students stop paying the extortion, the companies report the original act of cheating to the university (Yorke et al., 2020). The threat of this happening can keep a student in a cycle of contracting with the company, or recommending the service to their classmates, because they believe they will be reported if they do not. Further, I know from my own observation that once a contract cheating firm has access to the Learning Management System at an institution, its access to students grows rapidly: it can access course materials, view assignments, and directly message other students in the course in order to find new clients. The firm can access the data not only of the person for whom it is uploading assignments, but also class lists, and it can steal the intellectual property of the course professor, too; this material will likely be reused and resold by the company. And depending on the practices of the company, it can be hard for IT Services to flag and block this activity.

Why do students feel comfortable sharing something as critical to the integrity of the university as their LMS login information in the first place? There is a lot of rhetoric about so-called digital native students and their supposed disinterest in data privacy, but research does not support this assumption; on the contrary, students are increasingly concerned about these issues, but a lack of information and a lack of empowerment drive poor choices (Hargittai & Marwick, 2016). Our traditional-age students have been raised in an informational ecosystem where companies like Google and Facebook know everything about them (and increasingly, they have had a presence on these networks before they were meaningfully able to consent to it), and then they go to university and interact daily with companies like Pearson, Turnitin, and TopHat, which monetize their data with the express consent of their institutions. Perhaps instead of questioning their lack of care about their privacy and data, we might question whether they have ever felt empowered to resist these corporations, and whether institutions have taken seriously enough the fiduciary responsibility they hold over student data when it comes to Big Tech.

Contract Cheating as Case Study

The practices by which contract cheating companies find our students are worth exploring as one example of how practices that emerge in higher education are shadowed and exploited by the very companies whose influence we seek to reduce. Algorithms are all around us, and they drive a substantial amount of decision-making. Nor are they inherently bad or damaging: in its most basic form, an algorithm is simply a set of instructions that determines a series of outputs from a series of inputs. It’s an algorithm that drives what Netflix thinks you want to watch next, what Instagram thinks you might click through to buy, and what Google anticipates you want from your search results. These experiences are sometimes creepy—like when a store you shop at knows before you do that you are pregnant (Hill, 2012)—and often aggressively capitalist, but they aren’t necessarily explicitly harmful.
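To make the “inputs in, outputs out” definition concrete, here is a deliberately toy sketch of the kind of recommendation scoring a streaming service might run. The function name, the tags, and the catalogue are all invented for illustration; real recommendation systems are vastly more complex, but the basic shape is the same: behavioural data goes in, a ranked list of suggestions comes out.

```python
# A toy "recommendation" algorithm: inputs (a viewing history and a catalogue)
# go in, an ordered list of suggestions comes out. All names and data are invented.

from collections import Counter

def recommend(history_tags, catalogue):
    """Rank catalogue items by how many tags they share with a user's history."""
    preferences = Counter(history_tags)                  # what the user has watched
    scored = []
    for title, tags in catalogue.items():
        score = sum(preferences[t] for t in tags)        # overlap with past behaviour
        scored.append((score, title))
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

history = ["baking", "competition", "british", "baking"]  # inputs: past viewing
catalogue = {
    "Cake Wars": ["baking", "competition"],
    "Nature Doc": ["wildlife", "documentary"],
    "Bake Off": ["baking", "british", "competition"],
}
print(recommend(history, catalogue))                      # outputs: ['Bake Off', 'Cake Wars']
```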

Algorithms are not neutral, however. Instead, algorithms reflect the old adage of “garbage in, garbage out,” which is to say that whatever biases underwrite the programming of an algorithm will be reflected in its outputs (Stinson, 2020). And, since we live in a society that wrestles with racism, sexism, classism, ableism, and many other inequities, we should not be surprised that algorithms are often built in ways that encode those same inequities. Virginia Eubanks described the use of algorithms in the delivery of social programs as an “empathy override,” a decision to outsource perceptions about who “deserves” care (Eubanks, 2018). This is a way of avoiding harder and more complex political conversations, and it relies on a scarcity model of resourcing social programs and care. Those conversations are important, and they will be shaped by individual values, but we have to have them rather than hide behind the assumption that these processes are somehow neutral.

What algorithms, for example, make decisions about who is a good bet for a mortgage or business loan, and what assumptions underlie those parameters? We see algorithms used to redraw community boundaries to further disenfranchise the poor and the marginalized. There’s a term for this: digital redlining (Gilliard, 2017). Indeed, just as old-fashioned analog redlining worked in the service of segregation and reduced class mobility, digital redlining has a direct impact on socioeconomic mobility. Algorithmic processes are increasingly used by credit bureaus to analyze your social media connections, making judgements about financial solvency based in part on a subject’s friends and relations (Waddell, 2016). Critically, a person’s network is not a protected class, so while it may be illegal for an employer or lender to discriminate based on race, gender, or ability, it is not illegal to discriminate based on algorithmic assumptions derived from a person’s network (Boyd et al., 2014). Consider how much more of your network is documented and searchable now than ever before; your connection to a person the lender sees as undesirable is no longer theoretical or circumstantial, but instead comes with a lengthy data trail. Even though the realities of the people within a network may well be framed and circumscribed by those protected factors, nothing protects marginalized users from having this data turned against them. Which is to say: isn’t this just a fancy way to get around the protections against traditionally racist and classist practices?

Contract cheating firms are very aware of the power of algorithms—it’s how they find their clients. In Thomas Lancaster’s work describing how social media is used by contract cheating firms, he is effectively describing an algorithmic process when he reflects on how “A single tweet by a student, even one expressing that they have an assignment due with no indication that they plan to cheat, can lead to them receiving 20 or more visible replies from contract cheating providers within an hour from when the tweet is made” (Lancaster, 2019b). These aren’t human beings scanning social media: these are bots. They phish for students in incredibly predatory ways, using algorithmic processing of keywords to track students on social media and pounce when they are most vulnerable. If your institution has a hashtag it uses to collect student posts on social media, you can see this for yourself by following it for a little while, especially around midterms and finals. You’ll quickly find these companies using institutional hashtags to reach students, often cloaking their services in terms of “editing” or “tutoring” or “help.” It’s easy, especially if you’re not versed in institutional branding—or you’re just panicked and looking for any lifeline, and wanting it to be real—to see some of these posts and wonder if they’re legitimately connected to the institution itself. The companies also use these hashtags to track students as potential customers. They particularly like to use combinations of hashtags that pair a specific institution with words expressing the affective experience of student stress: #essaydue #finalsstress #essayhelp. So a student who is looking to commiserate with classmates on Instagram, and who uses the hashtag for her institution alongside, maybe, #freakingout #paperdue #needhelp, sends a bat signal not only to her classmates but also to predatory contract cheating firms, which sweep into her direct messages at the last moment and offer “assistance.”
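To illustrate how little sophistication this kind of targeting requires, here is a minimal, hypothetical sketch of the hashtag pairing described above: it flags posts that combine an institutional hashtag with “distress” hashtags. The hashtags and example posts are invented; the point is that the same trivial matching, run at scale against live social feeds, is all a predatory bot needs to find its marks, and all an institution needs to monitor its own hashtag for that activity.

```python
# A sketch of the keyword pairing described above: flag posts that combine a
# (hypothetical) institutional hashtag with "distress" hashtags. Posts are invented.

INSTITUTION_TAGS = {"#myuniversity"}
DISTRESS_TAGS = {"#essaydue", "#finalsstress", "#essayhelp",
                 "#paperdue", "#needhelp", "#freakingout"}

def extract_hashtags(post):
    """Pull lowercase hashtags out of a post's text."""
    return {word.lower() for word in post.split() if word.startswith("#")}

def is_target(post):
    """A post is a 'target' if it pairs a campus hashtag with a distress hashtag."""
    tags = extract_hashtags(post)
    return bool(tags & INSTITUTION_TAGS) and bool(tags & DISTRESS_TAGS)

posts = [
    "so excited for convocation #myuniversity",
    "up at 2am #myuniversity #paperdue #freakingout",
]
for post in posts:
    print(is_target(post), "->", post)   # False for the first post, True for the second
```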

While it is never okay to purchase an essay, it’s easy to imagine a situation where desperation combined with opportunity results in an individual making a choice they shouldn’t. Given the spiralling rise in contract cheating, it doesn’t seem likely that students are suddenly less ethical than they used to be, and research suggests that cheating is a highly contextual act; even those students who seem predisposed to contract cheating typically do not engage in it for every assessment (Ramberg & Modin, 2019; Rundle et al., 2019). Students are targeted by predatory companies when they are at their most panicked and most stressed, and this so-called “help” is available to them at their lowest point—say, 2 am the morning before a paper is due—when legitimate resources like learning centres, campus tutors, and office hours are not. Contract cheating is wrong. Preying on vulnerable students, and profiting off their misery, is more wrong.

Solutions

The barriers we place on learners, intentionally or not, can exacerbate the stakes and promote the fears and feelings that lead students to cheat. Whether it’s a high-cost homework system that leaves a student financially vulnerable, technology that can’t be used easily by students without stable internet connections, or a classroom environment that doesn’t allow students to adapt content for their own learning, all of these unnecessary barriers impede the ability of a student to succeed in a course. Each barrier brings additional stress. These barriers can also damage the relationship between faculty and student—and this relationship, too, has an insulating effect on student rates of cheating (Orosz et al., 2015). When students feel responsible to a class and valued by their instructor, they cheat less.

Many other writers in this collection will point to what we know about pedagogical strategies for reducing the temptation to cheat: scaffolded assignments, low-stakes practice, and reduced anxiety around performance. The research shows that these strategies work. We know that students are more likely to cheat on high-stakes assignments for which they have received little guidance. Conveniently, we also know that those same kinds of assignments do little to promote meaningful learning. Research suggests that we can both promote learning and reduce stress and anxiety for students by scaffolding assignments appropriately, checking in at multiple stages, and providing opportunities for questions and feedback (Rundle et al., 2019). Authentic assessments, too, which students can see clearly reflect the expectations of the world outside the university, have a meaningful impact in lowering rates of academic dishonesty (Medway et al., 2018). In truth, the tools we know work for good instruction also work to reduce the temptation to engage a contract cheating company in the first place (ICAI, 2016).

I propose we can compound the efficacy of these pedagogical interventions by helping students understand the value of their data and privacy, and by modelling our respect for that value in the way we handle their data: give students a full understanding of why their data has value, and then don’t sell it to the highest bidder on their behalf and without their informed consent. Does every student understand the seriousness of handing over their SIN, or the responsibility to other people in their classes inherent in keeping their login information secure? In this case, not sharing is caring: we protect ourselves and each other when we keep our learning tools secure. And it’s worth talking about how we secure learning materials, as well. Textbook question banks and homework system assignments are incredibly useful, time-saving tools for faculty members. They’re also incredibly insecure. If you use a question bank produced by a textbook manufacturer or other third party, I encourage you to do an experiment: take a random question from your bank and Google it. If that resource is remotely popular, it’s likely that searching out that one question will bring up the entire question bank. While it’s certainly true that some students will use test banks as study tools, it’s also not a far skip to cheating—and be certain, contract cheating companies already have access to all of that content, too. When a lot of users are drawing on the same resource, it only takes one bad decision by one person to make all of that material available on the open web. It’s a good reason to invest in building these tools on-site, particular to each unique context.
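For those who want to try the experiment, here is a small sketch of the “Google one question” check: it builds an exact-phrase search URL for a question drawn from a bank. The question text below is invented; substitute one of your own and see what the search returns.

```python
# Build an exact-phrase web search for a single test-bank question.
# The question string is a made-up example; replace it with one from your own bank.

from urllib.parse import quote_plus
import webbrowser

question = "Which of the following best describes the function of the mitochondria?"
url = "https://www.google.com/search?q=" + quote_plus(f'"{question}"')  # exact-phrase query

print(url)            # paste the URL into a browser, or:
webbrowser.open(url)  # open it directly
```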

Conclusions

It may seem like an extreme leap to connect the dots from an efficiency practice like using anti-plagiarism software or class-in-a-box courseware systems to the nefarious rise of contract cheating within our institutions. But what do we really mean when we assert a commitment to a culture of academic integrity? Is it acting with integrity to allow student data to be monetized without compensation, or to require students to subscribe to expensive services to submit assignments? Is it acting with integrity to offer no meaningful opt-out from, or true informed consent for, the use of these tools? And to return to that question from a student union representative at Academic Integrity Day, what do we model when it comes to the value of original work when institutional pressures like large class sizes and insufficient prep time lead us to lean on expensive—and, from an academic integrity perspective, wildly insecure—tools like homework systems? We are responsible for modelling for our students what it looks like to be a contributing member of an academic community, and we do so by taking seriously our students, their data, and their work, and not only when it comes time to run it through a plagiarism detector or check their IDs against proctoring software. When institutions serve students “course-in-a-box” content solutions; require them to engage with third-party, for-profit entities to submit their assignments; treat their data as valueless while someone else earns a profit; and sign away their rights to critique tools they are forced to use, we do not model academic integrity. Instead, we demonstrate that their education has been commodified, with each component bought and sold by interested parties.

To return to the words of the student union representative: if an instructor works within institutional pressures so egregious that the reasonable solution is to buy the lectures and assessments—and charge the student a premium for the experience—we lose the moral ground to say they cannot buy their submissions in turn. Academic integrity collapses when we fail to uphold our moral and ethical obligations, long before a student chooses to cheat.