1 Introduction

Public trust in universities appears to be declining. In this age of “fake news” and even “fake science”, the esteem of academic institutions is diminishing. In the eyes of the general public, universities may still be respectable institutions, but they are also seen as relatively self-centred and as having an insatiable hunger for (public) resources. Furthermore, doubts are being raised about the capacity of autonomous academic institutions to organise themselves so as to assure and protect the quality, relevance and efficiency of their activities. Stakeholders are asking for more information about costs and benefits, and for greater accountability.

There are several reasons underlying this growing demand for information and accountability. First, the financial contributions made by students, taxpayers and others to higher education are rising. Second, the growing number and variety of higher education providers, and of the (degree and non-degree) programmes they offer, make it increasingly difficult for (prospective) students to decide where and what to study. Similarly, employers and governments wish to be assured that higher education providers deliver the quality education and research services needed for their labour markets, their businesses and their communities. Third, our society is increasingly characterised by mass individualisation, in which the different clients of universities (in particular, their students) demand services customised to their needs, plans and abilities.

The result is an increasing demand for transparency tools: instruments that aim to provide stakeholders with information about the profiles and performances of universities. From the perspective of students, employers, public authorities and the general public, the need is growing for tools that enable better and broader use of information about the services and performances of universities.

For more than three decades, several tools have been (re-)designed to increase the transparency of the activities and performances of universities across their different missions: education, research, knowledge transfer and community engagement. In this chapter, I will address two higher education transparency tools: accreditation and rankings. I will present these tools in a brief theoretical context and will argue that the need for transparency can be seen as a new challenge for universities and the IAU, but also as an opportunity to regain the public’s trust.

2 Information Asymmetry

The basic theoretical notion underlying the increasing interest in transparency in higher education stems from an (economic) understanding of higher education as an experience good. An experience good is a good or service whose quality can only be judged after consuming it. This contrasts with the textbook case of “search goods”, whose quality consumers can judge in advance. Experience goods are typically purchased on the basis of reputation and recommendation, since physical examination of the good is of little use in evaluating its quality. It might even be argued that higher education is a credence good: a product whose utility consumers cannot judge even after consumption. That higher education is an experience or credence good underlines the importance of trust.

From the perspective of the provider, academics may argue that they know better than any other stakeholder what it takes to deliver high-quality higher education; and surely, they have a case. At the same time, this view implicitly perpetuates, and justifies, the information asymmetry between client and provider. According to principal–agent theory, information asymmetry might tempt academics and universities not to maximise the quality of their educational services. For instance, universities might, and do, exploit information asymmetries to cross-subsidise research activity using resources intended for teaching.

Principal–agent theory suggests several policy tools to protect clients and society against the possible abuse of information asymmetries. All of these tools are designed to affect the behaviour of the providers of higher education and research, whether by governments, by independent agencies or by the providers themselves, and they may take different forms. First, regulation: rules on service quality, standards for teaching, qualifications frameworks, quality assurance requirements, or conditions imposed on providers. Second, (financial) incentives may be developed to reward desirable behaviour and sanction undesirable behaviour. Third, the information asymmetry itself may be alleviated by focusing on the provision of information; this is the intention behind the use of transparency tools.

3 Accreditation

Accreditation is the most common form of external quality assurance in higher education. Its distinguishing characteristic is that external quality assessment leads to a summary judgment (pass/fail, or graded) that has consequences for the official status of the institution or programme. Often, accreditation is a condition for the recognition of degrees and for their public funding. In this sense, accreditation is the simplest form that quality assurance can take. Its primary aim is to assure that quality standards are met; the transparency function appears to be only a secondary aim.

When accreditation and other forms of external quality assurance were introduced, their focus was on what higher education institutions were offering, measured by input indicators such as the numbers and qualifications of teaching staff, the size of libraries, or staff–student ratios. However, the relevance of input indicators for making the quality of the teaching and learning experience more transparent, or for demonstrating the quality of outputs (e.g. degree completions) and outcomes (e.g. graduate employment) was questioned.

Accreditation standards therefore first began to include measures of institutional educational performance, such as drop-out rates or time-to-degree. More recently, accreditation has also come to emphasise achieved learning outcomes. The degree to which study programmes succeed in enabling students to learn what the curriculum intends is argued to present a more transparent, more pertinent and more locally differentiated picture of quality.

The emphasis on achieved learning outcomes redirects accreditation towards the diversified information needs of stakeholders, that is, towards higher education’s public value; in this way, it aims to enhance transparency. However, this holds only if the assessment of learning outcomes is comparative in nature, preferably on an international scale, and if the results are made public.

Admittedly, whether stakeholders are interested in measures of achieved learning outcomes is another matter. For instance, even if students behave as rationally as policy would have it, they would be interested not only in outcomes in the distant (and uncertain) future but also in the characteristics of the educational process and its context. Potential students (and others) are likely also to be interested in current students’ satisfaction with such factors, which would allow them to benchmark satisfaction scores across institutions and thus to make proxy assessments of programme quality. In accreditation systems, however, such information is often hard to find. Unlocking it is one of the challenges in further redesigning accreditation mechanisms as stronger transparency tools.

4 Rankings

Whereas quality assurance and accreditation were introduced mainly on the initiative of governments, university rankings have appeared mostly through private (media) initiatives. Rankings emerged in reaction to the binary (pass/fail recognition) information resulting from accreditation. They intend to address a need for more fine-grained distinctions in a context where many institutions and programmes pass the basic accreditation threshold.

It is widely recognised that current global rankings, however controversial, are here to stay, and that global university league tables in particular have a considerable impact on decision-makers worldwide, including those within universities. Yet major concerns persist about the rankings’ methodological underpinnings and about their drive towards stratification rather than diversification.

The following sets of problems surrounding the familiar global rankings can be distinguished. First, traditional university rankings do not distinguish the different information needs of their various users but provide a single, fixed ranking for all. Second, they ignore intra-institutional diversity: they present universities as wholes, while research and education are “produced” in faculties, hospitals, laboratories and so on, each of which may exhibit quite different qualities. Third, rankings tend to use available information on a narrow set of dimensions only, overemphasising research; this suggests to lay users that a large volume of frequently cited research publications indicates high-quality educational programmes. Fourth, the bibliometric databases that supply the underlying information on research output and its impact on peer researchers mostly contain journal articles, a form of scientific communication central to many natural science and medical disciplines but much less so to fields such as engineering, the humanities, law and the social sciences. In addition, the journals covered by these databases are mostly English-language journals, largely disregarding publications in other languages. Fifth, the diverse types of information and indicators underlying the rankings are weighted by the ranking producers and consolidated into a single composite value for each university, usually presented in a league table with a ratio scale. This is done without any explicit, let alone empirically corroborated, theory of the relative importance of the indicators, and without a sound methodological basis for the league-table scale; the sketch below illustrates how sensitive the resulting order is to the chosen weights.
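To make the fifth problem concrete, here is a minimal sketch in Python. The universities, indicator scores and weighting schemes are entirely invented (none are drawn from any actual ranking); the point is only that two equally arbitrary sets of weights reorder the same underlying data into opposite league tables.

```python
# A minimal sketch with invented data: how the (theoretically unjustified)
# choice of indicator weights reshuffles a composite league table.

# Hypothetical universities with scores (0-100) on three indicators:
# (teaching, research, citation impact)
indicators = {
    "University A": (90, 60, 55),
    "University B": (60, 95, 80),
    "University C": (75, 75, 70),
}

def league_table(weights):
    """Rank universities by a weighted composite of their indicator scores."""
    composite = {
        name: sum(w * s for w, s in zip(weights, scores))
        for name, scores in indicators.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# A teaching-oriented and a research-oriented weighting, equally arbitrary:
print(league_table((0.6, 0.2, 0.2)))  # ['University A', 'University C', 'University B']
print(league_table((0.2, 0.5, 0.3)))  # ['University B', 'University C', 'University A']
```

Nothing in the data itself dictates which weighting is correct, yet each produces a different “number one”; this is precisely the methodological arbitrariness the fifth criticism targets.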

Given these criticisms, some analysts (including this chapter’s author) have endeavoured to construct alternative rankings. In recent years, partly as a result of these efforts, not only have innovative rankings appeared, but the methodology of the traditional global rankings has also improved: information on individual areas (fields, disciplines) has been added, and the range of dimensions covered by the data has been broadened.

In particular, U-Multirank has addressed the shortcomings of the traditional global rankings. As a transparency tool, this ranking is very different from its competitors. First, U-Multirank adopts a multi-dimensional view of university performance; when comparing universities, it provides information about the different activities an institution engages in: teaching and learning, research, knowledge transfer, international orientation and regional engagement. Second, U-Multirank invites its users to compare institutions with similar profiles, thus enabling comparisons of “like with like” rather than of “apples with oranges”. Third, U-Multirank is interactive and stakeholder-focused: it allows users to choose from a menu of performance indicators and to select indicators according to their own preferences. Fourth, U-Multirank does not create league tables; it does not force its users to combine indicators into a weighted score or a numbered league-table position. Fifth, U-Multirank allows universities to analyse and communicate their own specific “profiles” and hence to emphasise their individual strengths. Sixth, U-Multirank assigns scores on individual indicators using five broad performance groups (from “very good” to “weak”) to compensate for the imperfect comparability of information. Finally, U-Multirank complements information pertinent to the institution as a whole with a large set of field-based performance profiles, focusing on particular academic disciplines or groups of programmes and using indicators specifically relevant to those subjects. The sketch below illustrates the third, fourth and sixth of these characteristics.
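The following Python sketch is again illustrative only: the universities, indicator scores, intermediate group labels and group boundaries are all invented rather than taken from U-Multirank itself (the source confirms only the five groups ranging from “very good” to “weak”). It shows the general idea of letting users select their own indicators and of reporting broad performance groups instead of a single league-table position.

```python
# Illustrative sketch with invented data, labels and cut-offs: report each
# indicator as one of five broad performance groups instead of folding
# everything into a single league-table score, and let users pick the
# indicators that matter to them.

GROUPS = ["weak", "below average", "average", "good", "very good"]

def performance_group(score, cutoffs=(20, 40, 60, 80)):
    """Map a 0-100 indicator score to one of the five broad groups."""
    return GROUPS[sum(score >= c for c in cutoffs)]

# Hypothetical institutional scores on three of the five dimensions
universities = {
    "University A": {"teaching": 85, "research": 55, "regional engagement": 70},
    "University B": {"teaching": 45, "research": 90, "regional engagement": 30},
}

def compare(selected_indicators):
    """Show each university's groups on the user's chosen indicators only."""
    for name, scores in universities.items():
        profile = {i: performance_group(scores[i]) for i in selected_indicators}
        print(name, profile)

# A user interested in teaching and regional engagement, not research:
compare(["teaching", "regional engagement"])
# University A {'teaching': 'very good', 'regional engagement': 'good'}
# University B {'teaching': 'average', 'regional engagement': 'below average'}
```

Because no composite score is computed, neither university “beats” the other overall; each simply shows a different profile on the dimensions the user cares about.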

In general, rankings provide information to the different stakeholders of universities, and from this perspective they can be seen as transparency tools. However, not all rankings are sufficiently developed methodologically to offer relevant, customised information and to assist clients and other stakeholders in making choices. As such, many global rankings remain relatively weak in their transparency function.

5 Conclusion

From the perspective of the need to increase the transparency of universities’ performance, the conclusions regarding the two transparency tools discussed here are as follows.

Accreditation remains a crude transparency instrument, providing little information of value to clients beyond the basic, though crucial, protection against substandard provision. The public value-oriented refinement of focusing accreditation on achieved learning outcomes, which would make accreditation more directly relevant to (prospective) students, cannot overcome this basic crudeness. Moreover, designing such apparently more relevant accreditation schemes remains a challenge, not least given academics’ resistance to their intrusiveness and the effort needed to design and incorporate sensible indicators of learning outcomes.

Regarding rankings, some recent initiatives, in particular U-Multirank, appear to have been designed to overcome the drawbacks of the traditional global rankings. The basic characteristics of U-Multirank empower stakeholders to compensate for their asymmetrical information position vis-à-vis higher education providers, while at the same time assisting those providers in communicating their specific profiles. Multi-dimensional, user-driven rankings thus have the potential to function as rich transparency tools: client-driven and diversity-oriented instruments. However, such a transparency tool is only as useful as the information it offers its users. In particular, the underlying data on institutions’ value added in terms of educational performance (e.g. achieved learning outcomes) and on their societal engagement need further elaboration.

Universities themselves can play a major role in improving both accreditation and rankings. Both sets of transparency tools will profit from a stronger commitment by universities to making them better serve stakeholders’ information needs. In return, these tools offer universities stronger accountability and better public visibility.

This is where the IAU can play a major role. As a well-respected global association of universities, the IAU can take the lead in assisting its members to present their profiles and communicate their specific strengths, while at the same time fostering a more open and transparent attitude towards their performances. Building such an open attitude may well be the best way to regain the public’s trust.