Introduction: The Kafkaesque World of the UK University

In 2014, Marina Warner, professor of English and celebrated novelist (Dame Warner, DBE, FRSL, FBA), suddenly left her post at Essex University. Writing in the London Review of Books, she recounted the events leading to her resignation, starting with a meeting chaired by the Vice-Chancellor, Anthony Forster:

The Senate had just approved new criteria for promotion. Most of the candidates under review had written their submissions before the new criteria were drawn up, yet these were invoked as reasons for rejection. As in Kafka’s famous fable, the rules were being (re-)made just for you and me. I had been led to think we were convened to discuss cases for promotion, but it seemed to me we were being asked to restructure by the back door. Why these particular individuals should be for the chop wasn’t clear from their records. Cuts, no doubt, were the underlying cause, though they weren’t discussed as such. At one point Forster remarked aloud but to nobody in particular: ‘These REF stars—they don’t earn their keep’ (Warner 2014).

At that stage, UK universities were still obsessively focused on meeting the demands of the government’s latest research assessment exercise, the ‘Research Excellence Framework’ (REF), a five- or six-yearly research evaluation exercise that determined a large part of universities’ budgets. Little did academics know that the criteria for funding had suddenly changed:

Everyone in academia had come to learn that the REF is the currency of value. A scholar whose works are left out of the tally is marked for assisted dying. So I thought Forster’s remark odd at the time, but let it go. It is now widely known—but I did not know it then—that the rankings of research, even if much improved, will bring universities less money this time round than last. So the tactics to bring in money are changing. Students, especially foreign students who pay higher fees, offer a glittering solution. Suddenly the watchword was ‘Teaching, Teaching, Teaching’ (Warner 2014).

Warner had recently been invited to chair the Man Booker International Prize for 2015. Her Dean had encouraged her to accept, promising to cover her teaching duties, and the Vice-Chancellor had written a letter of congratulation, enthusiastic about the prestige this would bring and the evidence it offered of her research ‘impact’, a key criterion for the REF. A few months later, however, the university’s priorities had shifted. The executive dean for humanities now presented Warner with the university’s ‘Tariff of Expectations’, containing 17 targets against which her performance would be assessed twice a year. The earlier promises to adjust her workload to meet her public commitments evaporated, and her ‘workload allocation’ became impossible to reconcile with the commitments she had been urged to accept. Since she could not teach while chairing the Man Booker prize committee, the university proposed that she take a year’s unpaid leave: that way it would save her salary, yet her research would still count towards the next REF and earn the university future income. ‘I felt that would set a bad precedent’, wrote Warner: ‘other colleagues, younger than me, with more financial responsibilities, could not possibly supervise PhD students, do research, write books, convene conferences, speak in public, accept positions on trusts or professional associations, and all for no pay’. So she resigned.

Marina Warner’s story highlights a number of significant features of the shifting, and often opaque, higher education policy regimes and their often anxiety-inducing and subjectifying effects. Warner likens her situation to that of Kafka’s protagonist, Joseph K, who is permanently wrong-footed by the ever-changing and inscrutable rules of the administration. In her case, what had changed were the key policy drivers of the university funding system. Teaching had always yielded the central and relatively stable funding of departments, whereas research funding depended on the fluctuating outcomes of the REF assessments. In 2010, the government suddenly removed direct funding for teaching and transferred the resources into loans that students could take out to pay higher fees, with the growing likelihood that these loans will never be fully recouped (McGettigan 2013). The new basis for departments’ and institutions’ financial viability lay in attracting ever-increasing numbers of high fee-paying students, and to this end staff resources were concentrated on achieving high ‘student satisfaction’ scores for teaching. Alongside the goals of pursuing ‘research excellence’ and achieving ‘world class’ status, UK universities are also subject to an annual National Student Survey (NSS) to measure student satisfaction with their degrees and a Teaching Excellence Framework (TEF) that the government hoped could be used to link student-intake numbers to an institution’s reputation for quality teaching (more on this below). As Warner’s case illustrates, these shifting and cumulative workload priorities created incompatible demands on the individual academic’s time and energy. In this chapter, we set out to map the features of this higher education regime and assess its implications for university futures. We ask: how are these disciplinary regimes of ranking and performance indicators changing institutional behaviour and transforming academic subjectivities, and at what cost? What kind of governance regime is the proliferation of ‘audit culture’ in higher education producing?

Context: Universities and the Rise of Audit Culture

Warner’s allusion to Kafka is both fitting and problematic. ‘Kafkaesque’ is greatly overused as a term to describe almost any situation in which individuals are confronted with a bizarre and impersonal bureaucracy they feel powerless to control or understand (Edwards 1991). As most dictionaries define it, the Kafkaesque situation usually entails a nightmarishly complex, confusing, bizarre and illogical quality. While the goal posts for reputation management and funding keep changing, unlike in Kafka’s castle there is a fathomable rationale behind these shifting priorities, one that relates to changes in the political economy of higher education. As Slaughter and Rhoades (2004, p. 17) put it, universities provide the two ‘raw materials’ of the global knowledge economy: the knowledge and graduates that can be converted into innovative products. However, whereas in the past universities were called upon to support their governments’ attempts to make their countries more globally competitive, now they are regarded as economic players themselves and integral drivers of that economy, including through ‘export education’ and the trade in international students (Wright and Ørberg 2017).

In a world composed of competing states each struggling to increase its share of capital and footloose assets in an increasingly mobile, insecure and risk-averse global knowledge economy, the role of national governments is now often depicted as one of finding and galvanizing into productivity the underproductive, under-utilized and dormant capacity in the sector as a whole—including the unharnessed potential of each individual. Various government reports on higher education reform have termed this ‘realising our potential’ (UK Cabinet Office 1993) or harnessing the sector’s ‘untapped capacities’. This explains the plethora of attempts to render universities more accountable through ever-more elaborate and calculative systems of measurement and auditing—what we have elsewhere termed the rise of ‘audit culture’ (Shore and Wright 1999, 2015a, b). In turn, the ranked results of these competitive audit systems are linked to differential funding. Within this punitive system, winners are rewarded with funding and prestige, while losers are named, shamed and have their resources withdrawn and reallocated to more successful competitors, thereby placing them further in jeopardy—what Warner aptly terms ‘assisted dying’. According to the rationales of neoliberal governments, this system of economic rewards incentivizes institutions and individuals to mobilize all their resources so that they become more efficient and productive. In the eyes of many government ministers and those higher education reformers who believe that outsourcing and commercialization are the solution to current funding shortages, academics are basically ‘lazy’ and ‘inward looking’ and prone to teaching from dusty old lecture notes, while leaving their more valuable ideas languishing in the bottom drawer of their desks. The role of the ‘competition state’ is to incentivize academics and university managers to activate these dormant resources and untapped human capital by putting them to work for the benefit of the economy.

The mobilisation of these supposedly under-exploited resources requires a new set of disciplinary technologies for steering institutions, reorganizing work and incentivising desired changes in academic behaviour. These new steering systems (benchmarks, output targets, workload allocations, performance appraisals, and various measures of quality and productivity) do far more than simply incentivize behavioural change: they have a transformative effect on social relations and academic subjectivities. They alter the way individuals see their work, their institution, and themselves. While some policy makers contend that standardized measures create better opportunities for personal and professional advancement, because they make performance expectations more explicit and transparent, others experience them as a source of deep anxiety and insecurity. As Bovbjerg’s (2011) research shows, opening oneself up to an institutional gaze in which one is unable to predict or control how supposedly objective information will be used is inherently stress inducing. Yet these mechanisms of measurement and audit are extremely effective in raising productivity and enabling managers to govern ‘at a distance’, as many university senior leaders have discovered. This emphasis on ‘governing by numbers’ and the utility of calculative practices is often seen as a central feature of governmentality, and it suggests that, for academia and other professions, the ‘roll-out’ phase of neoliberalisation is far from over (Peck and Tickell 2002).

How best to theorise these developments? Among the most notable concepts and frameworks advanced to explain these trends are ‘academic capitalism’ (Slaughter and Leslie 1999) and the ‘enterprise university’ (Marginson and Considine 2000). Other authors have deployed suggestive epithets to capture the transformation of the sector, ranging from the ‘Fall of the Faculty’ (Ginsberg 2011) and ‘Wannabe U’ (Tuchman 2011), to ‘University Inc’ (Washburn 2005), ‘College for Sale’ (Shumar 1997), ‘The Exchange University’ (Chan and Fisher 2008) and the ‘University in Chains’ (Giroux 2007). What all these books share is a critique of the way higher education has become progressively more marketized and commoditized. While we do not disagree with these analyses, we suggest that another useful theoretical lens for understanding the transformation of universities today is the concept of ‘audit culture’. By this term (itself another suggestive epithet) we mean the processes of enumeration, calculation, measuring, monitoring and accounting that have elevated auditing from a narrow set of practices used to assure the integrity of finances into an instrument of management and a general principle of social organization. ‘Audit culture’ refers to the manner in which whole areas of work and life have been refashioned, and some would say colonized, by the logics of financial accounting. As Marilyn Strathern (2000, p. 2) has observed, ‘[p]rocedures for assessment have social consequences’. They create regimes based on the ‘twinned precepts of economic efficiency and ethical practice’: ethical because they are predicated on claims about transparency and accountability. Audit thus creates a space where ‘the financial and the moral meet’ (Strathern 2000), a space where visibility supposedly induces legibility, probity and efficiency.

The growth of audit has been accompanied by the rise of new actors and industries geared to producing indicators, inventing systems for measuring outputs against targets, and generating rankings in order to raise performance and productivity. Like the world described in Kafka’s books The Trial and The Castle, this new bureaucracy produces a frustrating and arbitrary system of control with which academics, like K’s fellow villagers, try to comply, even though many realise that auditing in pursuit of ‘world class’ status is a futile chase after an unfathomable and unobtainable goal. Auditing is effectively a new form of knowledge/power (i.e. a new configuration of what Foucault termed disciplinary power), with new sets of professionals creating new kinds of proprietorial knowledge and new ways of extracting surplus and profit. In this respect, audit culture is both cause and effect of itself: not only do its regimes of accountability recreate organisations by rendering them auditable, they also create the raw material that feeds the expansion of the auditing and accounting industries. In the context of higher education, these technologies often have an authoritarian character: the ‘tyranny of transparency’ (Strathern 1998), or what Brenneis et al. (2005) call ‘coercive commensurability’, is one of the key reasons why universities have lost the ability to run themselves as self-governing institutions.

Measurement and Quantification of Everything

Universities—and education systems more generally—have long been sites where the testing, marking and grading of individuals have been instruments of ranking and discipline, and in many countries such assessments continue to serve as vehicles for the reproduction of elites. In recent decades, however, this process has been extended. No longer are pupils and students the only ones subject to regular performance assessments; now whole institutions, including their professionals, administrators and leadership teams, must contend with the imperative of continually improving performance.

The imperative to perform is wonderfully exemplified in Espeland and Sauder’s (2007) analysis of the ranking of U.S. law schools. Even though many law school deans view these rankings as absurd, calling them an ‘idiot poll’, ‘Mickey Mouse’, ‘plain wacky’ and ‘totally bonkers’ (Sauder and Espeland 2009, p. 68), every decision they take is now made with a view to its effects on their school’s rankings. The rankings have become ‘omnipresent’ and impossible to avoid. Any drop in a law school’s position has immediate repercussions on student recruitment and hence on income, with cuts, redundancies and loss of reputation as inevitable consequences. The rankings they take most seriously are those published by US News and World Report, an American media company founded in 1948 by conservative newspaperman David Lawrence. At the time of Lawrence’s death in 1973, the magazine had reached a circulation of over two million and subsequently became a major competitor to Time and Newsweek. However, in 2010 it changed to an online-only format and switched its business to ranking services. The company now produces rankings across a vast swathe of areas, from ‘Best Doctors and Medicare Plans’ and ‘Best Pensions’ to ‘Best Cars’, ‘Best Vacations’, ‘Best Hotels’, ‘Best Real Estate Agents’, ‘Best Financial Advisors’ and ‘Top-Performing Funds’ (US News and World Report 2016). It also publishes an annual ‘Best College Guide’ that ranks all types of colleges, and this has become the most important source of information for prospective students deciding which programmes to choose. Indeed, even when it was still a magazine, the spike in sales for the annual ‘Best College Guide’ was so high that it became popularly known as the ‘swimsuit edition’. However, the methodologies used to construct these league tables are questionable and far from scientifically robust (Wright 2012). As Gladwell (2011) points out, 20% of the overall grade comes from ‘Faculty Resources’, which is calculated from a weighted combination of class size, faculty salary, percentage of professors with the highest degree, student-faculty ratio, and percentage of full-time faculty. These measures are bad proxies for education and do not capture in any way how a college informs, inspires and challenges its students. Another category, ‘Undergraduate Academic Reputation’ (22.5% of the mark), is based on a survey of presidents, provosts and admissions deans who are asked to grade 261 national universities: ‘[w]hen a president is asked to assess the relative merits of dozens of institutions he [sic] knows nothing about, he relies on their ranking’ (Gladwell 2011, our emphasis). In short, reputation and ranking become a mutually constitutive circuit. The rankings induce involuntary ‘reactivity’, and their unwilling endorsement by the deans ‘makes these shaky measures pervasive, and generative of the organisation itself’ (Sauder and Espeland 2009, p. 68).
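To see how mechanically such a league table is assembled, consider the following illustrative sketch of a weighted composite score. Only the 20% ‘Faculty Resources’ and 22.5% ‘Undergraduate Academic Reputation’ weights are taken from Gladwell’s account above; the residual category, the input scores and the two colleges are invented assumptions for the purpose of illustration.

```python
# Illustrative sketch of a weighted composite ranking score of the kind
# described by Gladwell (2011). Only the 0.200 and 0.225 weights come from
# the text above; the residual weight and all input scores are invented.

WEIGHTS = {
    "faculty_resources": 0.200,    # from Gladwell (2011)
    "academic_reputation": 0.225,  # from Gladwell (2011)
    "other_proxies": 0.575,        # hypothetical stand-in for the remaining categories
}

def composite_score(scores):
    """Return the weighted sum of normalised (0-100) proxy scores."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Two hypothetical colleges, identical on every measure except reputation.
college_a = {"faculty_resources": 70, "academic_reputation": 90, "other_proxies": 60}
college_b = {"faculty_resources": 70, "academic_reputation": 60, "other_proxies": 60}

print(round(composite_score(college_a), 2))  # 68.75
print(round(composite_score(college_b), 2))  # 62.0 -> outranked on reputation alone
```

Nothing in such a formula observes teaching directly. The heaviest single input is reputation, which is itself partly an echo of the previous year’s ranking, and that is precisely how the mutually constitutive circuit closes.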

These are just some of the ways in which information is provided to students as ‘consumers’ so that they can make more informed, rational choices when selecting their courses. In England, evaluations of education quality were traditionally uncoupled from issues of funding, as university teaching was covered by a block grant from government. Since 2004, however, the economic survival of universities has increasingly come to depend on their reputation, rankings and ability to attract fee-paying students. This began with the New Labour Government’s introduction of a market in fees that year, but was massively amplified after 2010, when the Conservative and Liberal Democrat coalition government took the highly controversial decision to triple university fees and withdraw funding for all teaching except the STEM subjects. Currently, one of the main sources of information for students (and parents) choosing university courses is rankings, notably the QS or Times Higher Education World Rankings, yet none of these metrics actually measures education or teaching. The other main source of information about universities is the National Student Survey (NSS), run annually since 2005 on behalf of the UK higher education funding bodies and based on an online questionnaire administered to final-year students. It comprises 22 ‘attitude’ questions about the ‘learning experience’ and includes measures for teaching, assessment, personal development, academic support, learning resources, organization and management, and overall satisfaction. As in the United States, university managers take enormous pride in positive results and use them in profiling and promoting their institutions to prospective students. However, a recent critical report by Ipsos MORI found major flaws in the reliability of these data (Jump 2014). This was attributed in part to students filling out the questionnaire as quickly as possible and ticking ‘yes’ to everything (the average completion time was five and a half minutes, but 20% completed it in under two minutes), but also to the fact that students have a ‘vested interest’ in the ‘over-zealous promotion’ of their institutions (Havergal 2015a). The report concluded that since NSS scores are ‘likely to benefit both students and institutions themselves’, there may ‘be some incentive on the part of both to encourage or give positive ratings’ (Havergal 2015a). UK universities are not alone in mobilising students to enhance their ratings: one of the University of Auckland’s advertising poster slogans proclaims, ‘Let our reputation build yours!’

A key problem for governments is that there are few reliable metrics for evaluating education or teaching. In response, in 2017 the UK government introduced the ‘Teaching Excellence Framework’ (TEF) to help students ‘drive’ the system and to allow the top-tier universities to increase their fees. The hunt for a suitable concept and method to evaluate teaching led some to look at the US Collegiate Learning Assessment system, which aims to test and measure student ‘learning gain’ over the period of their study. The problem is that while these tests purport to be a neutral measure of generic skills (e.g. problem solving, interpersonal communication, use of digital information, dealing with complex situations), ‘the contents of a test will be far more closely related to some subjects … than others’ (Wolf, cited in Havergal 2015b, p. 21). The UK government also decided to include ‘employability’ as a metric of teaching excellence, using the Destinations of Leavers from Higher Education (DLHE) survey to measure the proportion of students who are in highly skilled employment or further study 6 months after graduation (Blyth and Cleminson 2016). A more recent proposal from the newly created Office for Students is to calculate this using data from HM Revenue and Customs (OfS 2018, pp. 4, 17). The capacity of universities to embed ‘employability’ and the ability of students to gain meaningful employment will thus be measured by financial earnings and tax returns, reinforcing the neoliberal assumption that the value of a university degree must be financialised and measured in terms of its return on investment.

Auditing Research Excellence: The Managerial Uses of Pseudo-scientific Measures

These managers worry me. Too many are modest achievers, retired from their own studies, intoxicated with jargon, delusional about corporate status and forever banging the metrics gong. Crucially, they don’t lead by example (Bignell, cited in Colquhoun 2012a).

University research is another area that has been subjected to repeated attempts to measure the quality of academic work. Since the 1980s, there has been an explosion of national research evaluation exercises aimed at improving performance, output and competitiveness among individual researchers, their departments, their institutions and even entire countries. The UK’s Research Assessment Exercise (RAE) was one of the first such exercises, introduced in 1986 as part of a package of neoliberal reforms developed by the Conservative government of Mrs. Thatcher. The RAE (subsequently rebranded the ‘Research Excellence Framework’, or REF) is an intensive research evaluation exercise conducted every 4–6 years that measures and competitively ranks the research outputs of university departments across the UK. While the evaluations are based on peer review, the academic community has no influence over the resulting allocation of resources, which rests with the government and its funding bodies.

There are four points of significance about this process. First, each academic has to submit no more than five pieces of work produced during the assessment period. This limit is intended to emphasise quality and to deter salami-slicing and rushing to press. Second, evaluations are made by panels of experts in each field who are expected to read the books, articles, scholarly publications or creative works submitted. The 2012 REF guidelines stated explicitly that ‘No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing quality of research outputs’. Third, these research assessment exercises have been used to stratify the higher education sector: successive exercises have concentrated research funding in ever-fewer institutions and departments. This strongly incentivizes university leaders to maximize their REF scores by making ‘strategic decisions’ about where to invest and which subject areas or departments to close. It also incentivizes academics to publish at any cost, as failure to be classified as ‘research active’ and to meet the required performance target may result in ‘demotion’ to a teaching-only contract and the end of a research career (despite claims by university senior managers that the RAE or REF process has no bearing on HR processes or academic employment matters). Everyone in the university therefore learns what ‘counts’ and is pressed to re-orientate their energies accordingly, in a process we might call the systemic RAE-fication of academia (Loftus 2006; Shore 2008, pp. 290–91; Lucas 2017, p. 216). Fourth, and unsurprisingly, national reviews have revealed massive gaming of the RAE system as academics and managers seek to play the system (Lucas 2006; Wright 2009).

Universities have developed strategic plans for climbing the ranking ladder that impose ever-greater expectations on each individual academic. For example, Queen Mary University of London was ranked 48th in the RAE 2001 and made an astounding leap to 13th place in RAE 2008. The leadership then devised a strategy to elevate the university into the top five UK universities by the 2014 REF. In 2012, the university produced a table of its expectations for academic performance against four criteria: the quantity of papers published; the quality of the journals in which papers are published (the proxy measure being journal impact factor); total research income; and research income as ‘Principal Investigator’ (PI). Furthermore, these criteria were applied retrospectively to assess the performance of academics over the period 2008–2011. To keep their jobs, academics at Queen Mary had to meet the minimum threshold in three out of four categories. For a lecturer, that meant 5 papers, one of them in a ‘quality journal’, and £200,000 in research income, at least half of it earned as PI. For a professor, the expectations were 11 papers, 2 of them in top journals, and £400,000 in research income, of which at least half as PI.
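To illustrate how little judgement such a rule involves, here is a minimal sketch of the ‘three out of four categories’ test just described, using the lecturer thresholds from the text; the sample staff record is hypothetical.

```python
# Minimal sketch of Queen Mary's 'meet the threshold in 3 of 4 categories'
# rule as described above. Lecturer thresholds are taken from the text;
# the sample staff record below is hypothetical.

LECTURER_THRESHOLDS = {
    "papers": 5,               # papers published over the assessment period
    "quality_papers": 1,       # papers in 'quality' (high impact factor) journals
    "income": 200_000,         # total research income (GBP)
    "income_as_pi": 100_000,   # at least half of that income earned as PI (GBP)
}

def meets_expectations(record):
    """Return True if the record clears at least 3 of the 4 thresholds."""
    met = sum(record[key] >= LECTURER_THRESHOLDS[key] for key in LECTURER_THRESHOLDS)
    return met >= 3

# Hypothetical lecturer: one strong paper, modest grant income, heavy teaching.
lecturer = {"papers": 3, "quality_papers": 1, "income": 150_000, "income_as_pi": 90_000}
print(meets_expectations(lecturer))  # False -> flagged as under-performing
```

Note what the rule cannot see: teaching load, the significance of the work, or anything else that is not one of the four counts.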

As critics have noted, as well as being unattainable for many academics, Queen Mary’s yardsticks were ‘utterly brainless’ (Colquhoun 2012a). As David Colquhoun (a professor of pharmacology, Fellow of the Royal Society, and honorary director of the Wellcome Laboratory for Molecular Pharmacology) noted, articles can only be mass-produced at that rate by publishing data in multiple small fragments, or by appending a senior researcher’s name to somebody else’s work, often without properly reading or checking the data. ‘Such numbers can be reached only by unethical behaviour’, and ‘the rules provide an active encouragement to dishonesty’ (Colquhoun 2012a). Many Nobel Prize winners (including Andrew Huxley, Bernard Katz, Bert Sakmann and Peter Higgs) published very few papers in their lifetimes and would doubtless have been fired on these grounds.

The university’s criteria defined high-impact journals as those with an impact factor greater than 7. However, as Colquhoun notes, for some disciplines the highest-ranked journals have impact factors of only 4 or 5, while in others the top journals publish only review papers, not original research. Moreover, the number of citations a paper receives bears no relation to the impact factor of the journal in which it appears (Seglen 1997). Colquhoun (2012a) quotes an analysis of the journal Nature which found that, while the mean number of citations per paper was 114, one paper had 2364 citations and 35 others had 10 or fewer. Similarly, a 2001 study of the citations accrued by the 858 papers published in Nature in 1999 found that only 80 of them accounted for half of all the citations (Colquhoun 2012a). In addition to these faulty yardsticks, every academic at Queen Mary had to produce at least one PhD student in the assessment period. Given the state of the employment market and the lack of jobs for such graduates, the ethics of expanding doctoral numbers simply to improve a university’s league-table standing is highly questionable.
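The statistical point is easy to demonstrate: because citation counts are heavily skewed, a journal’s mean citation rate (the basis of the impact factor) says almost nothing about a typical paper in it. The sketch below uses an invented distribution with one blockbuster paper and a long tail; the numbers are illustrative only, not the Nature data cited above.

```python
# Why journal-level impact factors are poor proxies for individual papers:
# citation distributions are heavily skewed, so the mean is dominated by a
# handful of outliers. All numbers here are invented for illustration.
from statistics import mean, median

# One blockbuster paper plus a long tail of modestly cited ones.
citations = [2400] + [120] * 4 + [15] * 10 + [5] * 35

print(round(mean(citations), 1))   # 64.1: dragged up by the single outlier
print(median(citations))           # 5: what a 'typical' paper receives
print(round(max(citations) / sum(citations), 2))  # 0.75: outlier's share of all citations
```

Judging an individual by the journal’s average is, in effect, crediting every paper with the outlier’s citations.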

The use of such spurious metrics to evaluate scientists was criticized publicly by several scholars, including two from the institution itself. In a letter published in the Lancet, two biologists, John Allen and Fanis Missirlis, criticized the way the criteria had been applied to the School of Medicine and Dentistry (where 29 academics were facing dismissal for not meeting the performance criteria). They made four important points. First, these targets often hit the wrong people, because the Head of School and Human Resources relied on cold, abstracted metrics rather than an understanding of the quality of an individual’s research or potential. Second, the manner in which this disciplining was conducted, with targeted victims having to justify their ‘retrospective crimes’ in an audience with the Head of School and Human Resources, was a punitive procedure recalling the Spanish Inquisition or, to continue our analogy, Kafka’s officials, who never explain the procedures or what the condemned person has been accused of. Third, the criteria fail to address the quality of the science itself; as Allen and Missirlis (2012) note, ‘there are no boxes to tick for advances in knowledge and understanding—no metrics for science itself … [this] slaughter of the talented relies entirely on a carefully designed set of retrospective counts of the uncountable’. Finally, these performance criteria are rarely applied to the ‘Grand Inquisitors’ themselves who, as the authors note, would conspicuously fail by their own criteria, ‘yet to question them is heresy’. That last statement proved prophetic: the authors of the Lancet letter were charged with misconduct and subsequently sacked. Their department, having under-performed in the RAE 2008, was the second chosen for this treatment, and Missirlis was dismissed for not having met the criteria. Allen, a highly respected and productive professor who did meet the criteria, was initially sanctioned by having all of his specialist teaching taken away and being required to teach service courses instead. When he indicated his unwillingness to accept this punishment, he was sacked for ‘refusing to obey a reasonable management instruction’ (Jump 2015b). He subsequently moved to University College London, but without a lab.

What is interesting in this and many other cases where performance measures are turned into managerialist tools for ranking, disciplining and firing staff is the pseudo-scientific language that is used to justify such decisions. In response to Colquhoun’s criticisms, the Vice Chancellor of Queen Mary University (QM), Professor Simon Gaskell, wrote a letter to The Times arguing that as QM was ranked in the top dozen research universities in the UK, these actions were necessary to address areas where ‘performance does not match expectations’ so as ‘to ensure that our students receive the finest research-led education’ and ‘to safeguard QM’s financial stability’. Management had ‘applied objective criteria to the assessment of individual academic performance based on generally recognized academic expectations’, and now he would invest to rebuild those areas where staff had been fired (Gaskell 2012). This discourse combines several threads: the imperative to ‘safeguard’ the university’s financial future by raising its rankings; an ethical obligation to defend its students’ interests; and the application of strictly ‘objective’ and impartial criteria based on ‘recognized’ and commonly accepted expectations of academic performance.

In fact, none of these claims holds up, as Colquhoun notes in his rejoinder (2012b). The number of publications demanded of QM academics was far beyond what the RAE required, and staff who produced large numbers of publications were unlikely to have the time or inclination to teach students as well. To improve its standing in the REF, QM’s leadership deployed methods that had been explicitly ruled inadmissible in the REF guidelines. And when evaluating the research output of individuals, management assumed that research was the primary activity of every academic, whereas Missirlis, for one, was shouldering a heavy teaching load. As in Marina Warner’s case, this highlights the Kafkaesque way in which the orientation of an institution changes, gybing and tacking to follow shifts in government funding. The result is a volatile environment in which, when teaching funding is stable, managers focus primarily on pursuing variable funding from research, but when teaching funding follows students, the focus suddenly becomes ‘teaching, teaching, teaching’.

Effects of Indicators and Rankings on Academia

The question posed at the outset was how we should theorise these trends in higher education, and what effects this quest for world class status, pursued through a proliferation of performance targets, indicators and rankings, is having on academics and on universities. Do they actually deliver the better outcomes and organizational transparency that they proclaim? As the examples above illustrate, the REF system has perverse effects on the public university and corrodes its civic mission. Peter Scott (2013), professor of higher education and former editor of the Times Higher Education (THE), likens the REF to a monster: ‘a Minotaur that must be appeased by bloody sacrifices’. Like the Minotaur, too, it sits within a labyrinth whose complexity has consumed the professional lives of many of its victims. At Queen Mary University, the fates of Missirlis and Allen can be conceptualized as sacrificial offerings to the new regime of academic accountability; they were effectively ‘collateral damage’ in a system in which institutions and individuals believed they had no real choice but to play this high-stakes game. Yet the overall result was a corruption of the university’s main purpose, so that pursuing better REF grades rather than producing good science and scholarship becomes the ordering principle. As Scott (2013) puts it, ‘research is reduced to what counts for the REF’, and those aspects of academia that cannot be counted or rendered commensurable on numerical score sheets by definition do not ‘count’. Reflecting on Warner’s experience, Meranze (2014) similarly concludes that ‘the demands for scholarship were increasingly irrelevant for the funding of the university or for the allocation of resources within the university’. Rendering certain aspects of university life visible, and therefore more calculable and governable by senior managers and administrators, is the logical counterpart to the systematic downgrading or invisibilising of other areas of academic life (such as scholarship for its own sake, critical research, or unconventional yet inspirational teaching) that are inconsistent with the neoliberal and managerial vision of the competitive ‘world class’ university.

However, it would be misleading to conclude that the effects of these indicators and rankings are simply repressive or perverse: they are also performative and productive and, for senior administrators and managers at least, often extremely empowering. Indeed, one of the most important effects of this avalanche of indicators and rankings has been to reinforce a series of developments already underway as a result of the neoliberal reforms of higher education. The first of these was to recast universities as transnational business corporations operating in a competitive global market, a development particularly evident since the 1980s in English-speaking countries such as the UK, Australia, Canada and New Zealand, but increasingly visible in many European countries as well. A second was the withdrawal of public funding across the sector and the encouragement of universities to pursue alternative revenue streams, particularly from the private sector: managers have financialised and marketised the university throughout its operations as it has come increasingly to resemble a for-profit organization. A third is the shift in power from academics towards senior administrators and managers, who increasingly arrogate to themselves the roles of making decisions, steering the enterprise and setting its policy priorities, even to the extent of claiming ownership of the university and referring to themselves as ‘the university’ (Shore and Taitz 2012; Ørberg 2007).

Indicators and rankings have thus helped to establish a new regime of governance and authority, one that equates the role of a vice chancellor with that of a private company’s CEO, with corresponding executive salaries and privileges. They have also reinforced the new hierarchies and cleavages that have come to characterise the neoliberal university, particularly the division between a new class of professional administrators (the ‘administariat’) and the burgeoning ranks of an increasingly de-professionalised and casualised academic workforce (the ‘precariat’). One of the paradoxical effects of these changes is that while universities have been given greater institutional autonomy and ‘freedom’ to manage their own financial affairs and risks, they have also become increasingly dependent on, and vulnerable to, market pressures, and servile to government political agendas. Many university management teams have started to impose minimum expectations for research performance in an effort to improve their institution’s standing in the next research assessment exercise. In some instances, these performance targets have been pitched at such a high level that they are unachievable. At Newcastle University in 2013, for example, under the terms of a new management initiative called ‘Raising the Bar’, professors, readers and senior lecturers in the humanities and social sciences were expected to bring in at least £6000 to £12,000 a year in external grant revenue (for lecturers the required amount was £3000 to £6000 a year), as well as producing at least four 3* research outputs in the period before the next REF (Grove 2015). Even more unrealistic was the expectation that each academic should graduate one PhD student per year: given the number of publicly funded PhD studentships available nationally, this target would have required Newcastle University to monopolise the entire UK supply (BBlaze 2015).
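The arithmetic behind this last claim is worth making explicit. The sketch below is a back-of-envelope check using invented round numbers (neither the Newcastle headcount nor the national studentship figure comes from the source): even on generous assumptions, one completion per academic per year claims an implausibly large share of the national supply for a single institution.

```python
# Back-of-envelope check of the 'one PhD completion per academic per year'
# target. Both figures below are invented round numbers for illustration;
# neither comes from the source text.
newcastle_academics = 2_000        # hypothetical count of REF-eligible staff
uk_funded_phds_per_year = 10_000   # hypothetical national supply of funded studentships

required = newcastle_academics * 1  # one completion per academic per year
share = required / uk_funded_phds_per_year

print(f"{required} completions needed = {share:.0%} of the national supply")
# -> on these assumptions, one university alone would need 20% of all
#    publicly funded PhDs in the UK every year
```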

Academics rightly fear that these new targets could be used to make individuals redundant on capability grounds—which is undoubtedly part of the rationale behind the initiative and a logical consequence of failure to meet the targets. In 2019, there was a dispute at Liverpool University after the administration informed junior academics that they would not pass probation unless they published a paper ‘judged to be internationally excellent’ every 18 months. This level of output was far in excess of what the REF demanded and was accompanied by a new timetable policy which, staff claimed, cut research time, making these targets even more difficult to reach (Grove 2019). Similarly, at the University of Exeter, the probationary period for new lecturers in the social sciences has been increased to 5 years, during which time they are expected to have raised £100,000 in external grants (personal communication). A 2015 survey found that one in six universities in the UK had introduced individual performance targets for obtaining research grant money (Jump 2015a). As Grove (2015) notes, such funding income targets also represent a threat to academic freedom ‘as they would effectively govern the way academics approach their subject’, leading them to forgo ‘blue skies’ research and pursue smaller, short-term ‘normal science’ projects to meet income targets (Wright 2009). In some universities, this process has been taken further, with senior management and commercialisation units now deciding on academic appointments based on calculations of which future research areas promise the greatest financial returns to the university (Lewis and Shore 2017).

Conclusion: The Costs of Being ‘World Class’

Global ranking and the pursuit of ‘world class’ status are clearly having a transformative effect on universities. They have been catalysts in recasting academics as atomised individuals operating in a competitive higher education market: a de-professionalised workforce of researchers and teachers whose work must be incentivised, monitored and measured by management. They have also been influential in reshaping academic behaviour. Academics must now constantly measure their own performance in a labyrinthine system whose logic is often opaque or meaningless to those at the academic chalk face. The university arms race for ‘world class’ status is conducted through auditing procedures that have abandoned their original concern with probity and trust in favour of calculations, proxy measures and rankings driven largely by financial bottom lines. As in the bureaucracy emanating from Kafka’s castle, the system is riddled with contradictory logics and perverse effects: it claims to be founded on economic rationality, yet its consequences are profoundly irrational; it fetishises innovation and entrepreneurship, yet produces conformity, conservatism and risk-aversion; it lionises competition, individualism and choice, yet most of academia works through cooperation; and it now claims to put ‘the student experience’ first, yet the level of debt it produces has created an epidemic of student stress and mental health problems.

As Kafka’s protagonist Joseph K found, it is difficult to locate the author or agent behind the processes that created this system, and futile to ask who (or what) is leading the incessant drive towards ever more coercive and calculative forms of measurement and control. The process has gone feral and increasingly runs according to its own logic, feeding on the metricised and performative world it creates. It has also become so normalized that it is now part of the fabric of contemporary university life. Despite their evident flaws and shortcomings, metricized performance targets, indicators and rankings appear to many as both unstoppable and impossible to oppose. However, like any regime of truth, they are in fact assemblages of diverse and contingent threads, held together in arbitrary webs of power which, when examined more closely, turn out to have little substance, although they have powerful effects. In this case, what these calculative practices and financialised targets are producing is a new kind of university regime, one increasingly orientated around neoliberal policy agendas, financial markets, and the priorities of a new class of senior administrators and managers.

How, then, are these disciplinary regimes of ranking and performance indicators changing institutional behaviour and transforming academic subjectivities, and at what cost? As our examples show, university management’s increasing reliance on instrumental and calculative performance measurement creates its own dynamic, one that further institutionalises the spread of audit culture. These performance indicators and targets are instrumental in producing calculable, accountable, ‘responsibilized’ and self-disciplined subjects: precisely the qualities of the ‘ideal’ academic in the new managerially led and neoliberalised university (Dean 1999; Lund 2012). Yet this ideal is itself far from fixed or stable, shifting with each change in priority and each new calculation of what pays, and therefore of what ‘counts’. The net result of these proliferating systems of performance measurement is a regime of governance structured around out-of-reach or impossible targets that can then be used to discipline and punish dissenters and laggards. For academics, these measuring and ranking systems generate a sense of permanent insecurity and the feeling that one can never quite do enough. The wider results include greater centralisation, loss of academic freedom, increasing workloads, and all the associated health problems, including depression and burnout, that these anxieties create.

Throughout this chapter, we have likened the regime of metricised performance management in universities to the alienating and surreal world of Kafka’s castle, but how useful or appropriate is this analogy? Kafka’s novels typically depict nightmarish settings in which characters are crushed by blind authorities or systems that are incomprehensible and inscrutable. Their sense of reality begins to fall apart as they struggle to grasp their changed circumstances. Kafka’s best known work of fiction, The Trial, for example, portrays a world gone mad. As Ivana Edwards (1991, p. 12) explains, the book ‘is about Joseph K., who, although in hot pursuit of the truth, is executed for an unnamed crime. Time and space are rearranged so they can work either for or against the protagonist; the horror of that world is that he never knows what is happening, or when.’ Many academics would no doubt recognise these elements of the Kafkaesque in their own workplaces. However, according to Edwards: ‘You don’t give up, you don’t lie down and die. What you do is struggle against this with all of your equipment, with whatever you have. But of course you don’t stand a chance. That’s Kafkaesque.’ In fact, The Trial ends with Joseph K voluntarily submitting to his accusers and being led away to his execution. But this need not be the outcome. Marina Warner, for her part, managed to find a path that led her away from the castle. She gained a new position as professor of English and Creative Writing at Birkbeck, University of London, and became a fellow of All Souls College, Oxford. In 2017, she was elected the first woman president of the Royal Society of Literature. A high-profile resignation, it would seem, can have a resounding impact and is not necessarily the death of an academic career, even in the Kafkaesque university.