Report slams ‘one-dimensional and flawed’ international university rankings

[Photo: Merton College Library, Oxford University]

A damning report from the Higher Education Policy Institute (HEPI), International university rankings: For good or ill?, identifies significant flaws in the data used to compile international rankings such as the THE World University Rankings, the QS World University Rankings, the Academic Ranking of World Universities (ARWU, also known as the Shanghai Rankings) and U-Multirank – flaws that it says undermine their validity.

Firstly, there is no way to ensure that data is gathered to comparable standards, the report observes: even the definitions of full-time staff and full-time students vary from country to country.

On top of this, universities supply their own data and compilers make “no effective effort” to verify it, the report asserts, which can result in significant errors.

The report focuses most heavily on a further flaw in the way the three most prominent rankings – THE, QS and ARWU – are compiled, arguing that their concentration on research makes them “essentially unidimensional”.

The rankings purport to use a range of criteria, including teaching, international outlook and academic reputation based on peer review, but in reality, the study argues, most of these metrics are essentially measures of research performance and activity.

“How, other than through knowledge of research articles, conference presentations, historical prestige and so on, is an academic in, say, Belgium likely to be aware of a university in Australia?” it challenges.

“They are certainly most unlikely to know anything about the quality of the teaching or outreach, which may be outstanding.”

Measurements of employer opinions on the quality of universities are also flawed, it contends: “It seems far-fetched to expect an employer in Belgium to provide a view about the quality of graduates of an Australian university.”

Even staff-to-student ratios are not necessarily an indicator of teaching quality, the study adds. For example, institutions with a large number of research staff will score more highly on this indicator, even if those staff do no teaching.

The study identifies the ratio of international to domestic students as the only metric that can “reasonably be claimed to be a factor independent of research” – but adds that this is partly dependent on migration policies and generally accounts for a tiny proportion of a university’s score.

Therefore, the study argues, the only way a university can improve its standing in international rankings is by improving its research performance, which can often come at the expense of a focus on teaching, widening participation and outreach.

This imbalance can even skew government policy, the study notes, funnelling money that could be used elsewhere in the system into boosting research metrics in order to improve a higher education system’s perceived international standing. France, Germany, Russia, China and Japan have all placed improving rankings performance on their lists of internationalisation goals.

“We have followed the evidence to its conclusion and show that international rankings are one-dimensional, measuring research activity to the exclusion of almost everything else,” the report’s author, HEPI president Bahram Bekhradnia, said.

“Indeed, what is arguably their most important activity – educating students – is omitted.”

In this area, the report looks more favourably upon U-Multirank, the most recent addition to the crowded global league table scene, but stresses that it is nevertheless “beset by other problems”.

“We are convinced that in U-Multirank we have solved all the problems identified in the HEPI report. As a matter of fact, this is why we developed U-Multirank in the first place,” Frans van Vught, project leader of U-Multirank, told The PIE News.

“Conceptually and methodologically, U-Multirank is an alternative and better approach than the traditional league table rankings,” he added.

However, the report says U-Multirank’s coverage is extremely partial, as many universities refuse to provide the necessary data.

Responding to the HEPI report in a statement published in University World News, THE rankings editor Phil Baty said he welcomed elements of the report’s constructive criticism, but took issue with what he saw as its fundamental misunderstanding: THE does not produce a single ranking of universities from best to worst, he argued, but publishes a variety of rankings highlighting specific metrics.

“Our weightings have been developed in consultation with universities, governments and academics over a decade or more and, consequently, our rankings are uniquely valid,” he added.

THE is also in the process of developing a methodology for a teaching-focused ranking of Japanese universities and considers this an area for growth in future, Baty said.

[Source: The PIE News]