Ranking methodology: How are universities ranked?

Very different ranking approaches

There are major differences in the ranking methodologies used: in the definitions of what constitutes quality, in the criteria and indicators used to measure it, in the measurement processes and in the presentation formats. The result is a set of very different ranking approaches, and consequently the ranking results differ considerably from one approach to another.

League tables are produced from weighted indicators

Media rankings are designed to produce a league table in which each university is assigned a specific rank: the higher the rank, the higher the supposed quality. How is this achieved? First, a specific definition of what constitutes quality in a university is required. On the basis of this definition, quality criteria and indicators are chosen to assess the universities. Quality criteria, such as research impact or teaching quality in the case of THES, are measured by specific indicators such as the number of citations per faculty member in the Thomson Scientific database or the student/faculty ratio. To calculate an overall score, each indicator is then given a particular weight; in the THES case, the indicators are weighted equally at 20%. Finally, this approach is applied uniformly to all universities, and the overall scores determine the league table.
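As an illustration of this weighted-indicator approach, the sketch below combines a handful of indicator scores into one overall score and sorts the universities into a league table. The indicator names, weights and scores are invented for the example; it shows the general mechanism, not any ranking's actual calculation.

```python
# Illustrative sketch of a league-table calculation built from weighted indicators.
# Indicator names, scores and weights are hypothetical examples, not any ranking's actual data.

# Five indicators, each weighted equally at 20%, as described above.
WEIGHTS = {
    "peer_review": 0.20,
    "citations_per_faculty": 0.20,
    "student_faculty_ratio": 0.20,
    "international_faculty": 0.20,
    "international_students": 0.20,
}

# Indicator scores are assumed to be already normalised to a 0-100 scale.
universities = {
    "University A": {"peer_review": 82, "citations_per_faculty": 74,
                     "student_faculty_ratio": 65, "international_faculty": 58,
                     "international_students": 71},
    "University B": {"peer_review": 77, "citations_per_faculty": 88,
                     "student_faculty_ratio": 70, "international_faculty": 62,
                     "international_students": 66},
}

def overall_score(indicator_scores, weights):
    """Weighted sum of the indicator scores -> one number per university."""
    return sum(weights[name] * value for name, value in indicator_scores.items())

# The league table: every university gets exactly one rank, ordered by overall score.
league_table = sorted(universities,
                      key=lambda u: overall_score(universities[u], WEIGHTS),
                      reverse=True)
for rank, name in enumerate(league_table, start=1):
    print(rank, name, round(overall_score(universities[name], WEIGHTS), 1))
```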

What is the problem with such an approach?

Ranking lists all use different definitions of university quality, different criteria and indicators to measure it, and different weightings for each indicator. For this reason, the ranking results also differ widely. The results cannot be interpreted sensibly without knowing what was measured and how the measurement was carried out. There is a major difference between using the number of alumni who have earned Nobel prizes as an indicator of educational quality (as the Shanghai ranking does) and using the student/faculty ratio (as the THES ranking does). And the results will look different depending on whether research output is given a weight of 20% (as in the THES ranking) or 40% (as in the Shanghai ranking).

In addition, the definitions of quality and the measurements used are determined by the ranking organisers; in the case of media rankings, this means the media themselves. It is often unclear why a particular definition was chosen, how well it is founded, by whom it was decided, and how open and reflective the decision process was. And yet such ranking lists have considerable influence when used to measure the quality of universities.
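To make the effect of the weighting concrete, here is a small worked example with two invented universities and invented scores. It only illustrates that changing a single weight can reverse a league-table order; it does not reproduce either ranking's actual calculation.

```python
# Worked example: two invented universities swap places in the league table when
# only the weight of the research-output indicator is changed.
# All scores and weights are invented for illustration.

scores = {
    "University A": {"research_output": 60, "other_indicators": 90},
    "University B": {"research_output": 90, "other_indicators": 75},
}

def overall(name, research_weight):
    other_weight = 1.0 - research_weight
    return (research_weight * scores[name]["research_output"]
            + other_weight * scores[name]["other_indicators"])

for w in (0.20, 0.40):  # research output weighted at 20% vs. 40%
    table = sorted(scores, key=lambda u: overall(u, w), reverse=True)
    print(f"research weight {int(w * 100)}%: "
          + " > ".join(f"{u} ({overall(u, w):.1f})" for u in table))
```

With these invented numbers, University A leads under a 20% research weight, while University B leads once the weight rises to 40%.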

The type and quality of data used are also very different

The Shanghai ranking list uses objective data that can be measured quantitatively. For its part, the THES ranking list relies heavily on subjective evaluations by experts. How valid the latter are and how well they represent the institutions covered are important questions that are left unanswered. Quality is defined on the basis of subjective criteria applied to all universities, regardless of what their mission and goals may be. The results are then presented in a league table, suggesting a high degree of measurement precision. Such precision, however, cannot realistically be achieved and should not even be implied in the first place. The promise of measuring university quality adequately and precisely across very diverse institutions and for different stakeholders is simply unrealistic.

Basic principles for better approaches

There are ranking approaches that are better suited to meeting the demands of the task. They are based on a number of basic principles such as:

  • a ranking of individual disciplines or departments instead of whole institutions

  • a multidimensional concept of university quality instead of a “one-size-fits-all” approach, taking into account the diversity of academic institutions, missions and goals as well as language and cultural specifics

  • a separate measurement and presentation of single indicators, each of which may be ranked on its own, allowing for individual preferences ("my-ranking") instead of an overall score (both illustrated in the sketch after this list)

  • a presentation of ranking results in rank groups (top, middle, bottom groups) instead of league tables
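
As a rough sketch of the last two principles, the example below ranks a few invented departments on each indicator separately, lets the reader weight the indicators according to their own preferences ("my-ranking"), and presents the outcome as top/middle/bottom groups rather than exact ranks. All names, scores, weights and group thresholds are hypothetical, and real implementations of this approach differ in detail.

```python
# Sketch of a multidimensional, group-based presentation instead of a league table.
# Departments, indicator scores, user weights and group thresholds are all
# hypothetical examples.

departments = {
    "Dept. A": {"research_reputation": 80, "citations": 70, "student_support": 55},
    "Dept. B": {"research_reputation": 65, "citations": 85, "student_support": 75},
    "Dept. C": {"research_reputation": 50, "citations": 60, "student_support": 90},
}

def rank_groups(values, top_cut=0.75, bottom_cut=0.25):
    """Assign each department to a top/middle/bottom group for ONE indicator,
    based on where its score falls within the observed range."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1
    groups = {}
    for name, value in values.items():
        position = (value - lo) / span
        groups[name] = ("top" if position >= top_cut
                        else "bottom" if position <= bottom_cut
                        else "middle")
    return groups

# 1) Each indicator is measured and presented separately (no overall score).
for indicator in ["research_reputation", "citations", "student_support"]:
    per_indicator = {d: ind[indicator] for d, ind in departments.items()}
    print(indicator, rank_groups(per_indicator))

# 2) "My-ranking": the reader, not the ranking organiser, chooses which indicators
#    matter and how much; here student support counts twice as much as the rest.
my_weights = {"research_reputation": 1, "citations": 1, "student_support": 2}
my_scores = {
    d: sum(my_weights[i] * s for i, s in ind.items()) / sum(my_weights.values())
    for d, ind in departments.items()
}
print("my-ranking groups:", rank_groups(my_scores))
```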


In an effort to address the many methodological problems of ranking lists, an International Ranking Expert Group was founded in 2004 by the UNESCO European Centre for Higher Education (UNESCO-CEPES) and the Institute for Higher Education Policy in Washington, D.C. They produced a very useful set of quality principles and good practices, the Berlin Principles on Ranking of Higher Education Institutions: http://www.che.de/downloads/Berlin_Principles_IREG_534.pdf