"The rank of a university may vary from one ranking to another": an interview with D. Egret
Since the 2000s, agencies and universities have published several world rankings of universities every year, aimed at students, parents and researchers. Daniel Egret, an astronomer emeritus at the Observatoire de Paris, specializes in data and has investigated the algorithms used since 2009–2011, when he was President of the Observatoire de Paris. He was among the first French researchers to decipher the algorithms behind the Shanghai ranking. Today, he is in charge of referencing and ranking at PSL, and he is currently reviewing the criteria and methods used in the main world rankings of universities.
PSL: Could you remind us of the main international university rankings?
Daniel Egret: There are many international university rankings: one of the most famous and oldest is the Shanghai ranking (ARWU ranking, from Jiao Tong University, created in 2003). This academic ranking (produced in an academic environment) uses only publicly available data, such as the number of publications of the institution in a recent period, or the number of Nobel Prize recipients and Fields Medal winners. In the same category is the Leiden ranking (CWTS, University of Leiden), which analyzes in particular the most cited publications as indicators of excellence and reputation. These rankings mainly attempt to assess the intensity and impact of research activity conducted in universities.
Other rankings are based on reputation surveys conducted among academics and/or entrepreneurs, and attempt to use different data to capture other angles of academic activity: for example, the student-teacher ratio, or the share of international students and faculty. The best known and most widely publicized are the THE (Times Higher Education) ranking and the QS World University Ranking (published by Quacquarelli Symonds). Their methodologies (i.e. the choice of data used) are diverse and may change from one year to the next. A large weight is given to reputation surveys, which are sometimes considered a source of opacity and fragility. Because of frequent changes in methodology, the resulting rankings are less suitable for measuring change over several years.
Other rankings, particularly in Asia, Russia, or the Middle East, have been deployed to highlight specific aspects of universities' missions.
In addition, many of the rankings also produce secondary rankings by subject area (Physics, Mathematics, etc.), or rankings that highlight specific data used in their overall ranking.
PSL: How and on which criteria are these rankings carried out?
D.E.: Again, we have to distinguish two categories: university rankings based on publicly available data (most frequently data on research publications and their impact, measured by citation rates); and university rankings that require institutions to provide additional data (for example, information on the number of students and teachers, diplomas awarded, etc.). Rankings in the first category choose to rank the top 500 or 1000 universities according to their criteria. In the second category are the THE and QS rankings, or the European U-Multirank ranking. As a general rule, only those universities and schools that have submitted data appear in these rankings.
PSL: Between the QS, THE or US News, the rank held by a university can vary by more than 40. How can such differences be explained? Is one ranking more reliable than the others?
D.E.: That's right, the rank of a university can vary greatly from one ranking to another. This is the case for ²ÝÁñÂÛ̳, and others. The reason for this is simple: each ranking uses different algorithms and weights. It can be compared to the results of a school exam, which can change according to the coefficient applied to the math test compared to the language or sports tests: depending on the coefficients used, the 'strong in math' or the 'strong in languages' will be favoured. Only the few students who are proficient in all subjects are always in the top position!
Among universities, the best students are, for example, Harvard, Princeton, Cambridge and Oxford, which are at the very top of each ranking, and which also happen to be the wealthiest universities. Then, depending on the weight given to data related to the size of the institution (such as the gross number of research publications) or to a staff ratio (often better for smaller institutions), a university may obtain very different grades in each ranking.
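The effect of coefficients described above can be sketched in a few lines of code. The institutions, indicators and scores below are entirely hypothetical, chosen only to show how changing the weights reorders the same set of universities:

```python
# Hypothetical normalized scores (0-100) on two illustrative indicators.
scores = {
    "Univ A": {"publications": 95, "staff_ratio": 60},  # large institution
    "Univ B": {"publications": 70, "staff_ratio": 90},  # small, selective
    "Univ C": {"publications": 85, "staff_ratio": 80},  # strong on both
}

def rank(weights):
    """Rank institutions by the weighted sum of their indicator scores."""
    total = lambda s: sum(w * s[k] for k, w in weights.items())
    return sorted(scores, key=lambda u: total(scores[u]), reverse=True)

# A ranking that emphasizes raw publication volume...
print(rank({"publications": 0.8, "staff_ratio": 0.2}))
# -> ['Univ A', 'Univ C', 'Univ B']

# ...versus one that emphasizes the staff ratio.
print(rank({"publications": 0.2, "staff_ratio": 0.8}))
# -> ['Univ B', 'Univ C', 'Univ A']
```

With identical underlying data, the size-sensitive weighting puts the large institution first, while the ratio-sensitive weighting puts the small selective one first; only the institution strong on both indicators stays near the top in both cases.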
Besides, the word "university" brings together, in the rankings, institutions that have very different sizes and missions: on the one hand, a public service university that trains a generation of students by adapting to a labour pool, and on the other hand, a very selective school that teaches a small number of subjects: comparing them may simply not be relevant at all!
The data selected and their weighting (coefficients) may also sometimes favor institutions under one national university model over another.
To deal with such comparisons, several rankings, in particular the European U-Multirank ranking, let users choose the indicators that suit their own situation (for example: "I am a student looking to train in robotics in a Scandinavian country: which universities will best meet my expectations?").
It should be noted that data reliability and consistency (e.g. across countries or across different university systems) generally require close attention when the data are used in detail in the rankings. This is also true for data relating to research publications, insofar as the affiliation of researchers to an institution is not always clear: this is particularly the case in France, where a significant part of research activity takes place in mixed units with several affiliations.
PSL: With the exception of the Shanghai ranking, the rankings seem to be mostly carried out by Anglo-Saxon agencies or newspapers. Is there a European ranking of universities and could we imagine one day a French ranking produced by the newspaper Le Monde or Le Figaro?
D.E.: The current mode of scientific publication and citation of research (which plays an important role in the data used for the rankings) is dominated by the publication of English-language articles in journals often historically linked to the Anglo-Saxon university model. It is not impossible that this situation will soon change with the strong rise of Chinese scientific publications.
In Europe, the desire to promote a European model has led to the creation of the U-Multirank ranking. However, as mentioned above, this tailor-made ranking is less suitable for widespread use and media coverage, and remains very little known by the general public.
The Leiden ranking (Netherlands) mentioned above is also a European ranking.
As for the French newspapers, they are aware of their readers' enthusiasm for rankings of all kinds and regularly cover the main existing university rankings.