World-Class Russian Education!

The goal of Project 5-100 is to maximize the competitive position of a group of leading Russian universities in the global research and education market.

NEWS


Dynamics of universities from “excellence initiating” countries in world university rankings

October 8, 2014

In modern society, knowledge is one of the most important drivers of economic and social development, and many countries have been making a transition to the so-called "knowledge economy" [1]. In these circumstances the role of higher education in general, and of universities in particular, is becoming more and more important. Universities can help build a globally competitive economy by training highly qualified specialists, conducting cutting-edge research and "transferring knowledge" from the sphere of pure theory to the realm of industrial production. The economies of certain countries, Denmark and Switzerland for example, depend greatly on the activities of their universities.
It is natural that, in this situation, the governments of many countries aim to improve the quality of their national higher education. First of all, the reputation of universities in the global higher education market is at stake. The better a university's reputation, the more opportunities it has to retain local talent and attract gifted foreign lecturers, researchers and students, and the greater its capacity to make a significant contribution to the economy and society of its country.

According to a famous expression, "Everyone wants a world-class university but no one knows how to get one" [2]. Today this status belongs to the universities at the top of the global 'best university' rankings: the "class" of particular universities and higher education systems is usually reflected in their positions in an authoritative ranking.

Over the last twenty-five years many countries have attempted to improve the quality of education and enhance the performance of their universities by implementing various "excellence initiatives". Beyond improving the quality of education and producing noticeable changes at the national level, these initiatives have been needed to raise the positions of national universities in world university rankings, where universities from different countries can be compared. An obvious advantage of the rankings is that they are easy to use not only for specialists and researchers in higher education, but for anyone who wishes to make their own evaluation.

Of course, rankings are far from perfect. There is a lot of debate about the adequacy of the criteria, parameters and indicators they use. This is largely because rankings usually capture only one aspect of university activity well, typically university research, while the educational and social missions of the same universities are barely covered. Research results are simply easier to measure: one can count the number of publications in prestigious science journals, the number of citations and references, the amount of funding a university receives from transferring technologies into production, and so on. However, even here many questions remain. How can we compare the achievements of a large classical university and a small, narrowly specialized school? Is the number of Nobel Prize winners a good criterion for assessing the quality of university research?

If there are problems even with such measurable indicators, what can be done about those reflecting the educational function of universities? Is the number of Nobel Prize winners enough to evaluate the quality of teaching? Can we rely on various academic or employer surveys, given that these subjective opinions are shaped by a university's reputation, which may have been built over a long period but may no longer be as deserved as it once was?

As for the social mission of universities, today's world university rankings have almost no indicators that would make it possible to measure and assess it.
As a result, most rankings are either openly called "research university rankings" or are, in fact, exactly that.

There are three main global university rankings now:

ARWU – the Academic Ranking of World Universities [3];
QS Rating – the Quacquarelli Symonds ranking [4];
THE Rating – the ranking of the Times Higher Education weekly [5].

All three attempt to rank universities by several indicators reflecting the key functions of a university in modern society. There are numerous other ranking systems, but they either base their conclusions on a single parameter and ignore all others (the Spanish Webometrics ranking considers only universities' online presence; the Taiwanese HEEACT considers only research; the ranking of the École des Mines de Paris focuses on the number of alumni holding executive positions in the world's leading companies) or borrow their data from the main global rankings.

Rankings show the dynamics of universities, thus making it possible to draw certain conclusions about the current condition of, and changes within, the higher education system of a given country. At the same time, a careful study of the rankings suggests that such dynamics should be analyzed only from the third year of a ranking's existence.

This is because compilers are still "polishing" their calculation methodology during the first two years of a ranking's life, settling the circle of regularly assessed universities and fixing the final number of positions. For example, THE listed only 200 places when first compiled in 2010 and later 400; the QS ranking covered only 200 universities when first released in 2004, and the number grew each year, reaching 863 universities by 2014. Given this, it is hard to speak of significant dynamics in a university's position, because a fall or rise may simply be connected with the appearance of universities that were not considered previously.

Overall, despite all the criticism of the ranking systems, every university wants to climb as high as possible in them, because a high position serves as a symbol of the university's success and of its recognition as a world-class institution.

The specific features of the main world rankings are described below.

ARWU
The Academic Ranking of World Universities (ARWU) was first published in June 2003 by the Center for World-Class Universities (CWCU) at the Graduate School of Education (formerly the Institute of Higher Education) of Shanghai Jiao Tong University, China, and has been updated annually ever since. It was the first global university ranking. The latest ARWU was released on 15 August 2014.

ARWU is a ranking system based on objective indicators. Data are collected from official sources and carefully processed; the role of subjective evaluations (surveys of any kind) is minimal. The main focus is on a university's research activity. The university with the best performance on a given indicator receives a score of 100, and the scores of the remaining universities are calculated as a percentage of the leader's result. The ranking uses six indicators [6]:

Alumni (weight 10%) – Alumni of an institution winning Nobel Prizes and Fields Medals;
Award (20%) – Staff of an institution winning Nobel Prizes and Fields Medals;
HiCi (20%) – Highly cited researchers in 21 broad subject categories;
N&S (20%) – Papers published in Nature and Science (For institutions specialized in humanities and social sciences N&S is not considered, and the weight of N&S is relocated to other indicators);
PUB (20%) – Papers indexed in Science Citation Index-expanded and Social Science Citation Index;
PCP (10%) – Per capita academic performance of an institution.
Alumni – The total number of an institution's alumni winning Nobel Prizes and Fields Medals. Alumni are defined as those who obtained bachelor's, master's or doctoral degrees from the institution. Different weights are set according to the period in which the degree was obtained: 100% for degrees obtained in 2001–2010, 90% for 1991–2000, 80% for 1981–1990, and so on, down to 10% for 1911–1920. If a person obtains more than one degree from an institution, the institution is counted only once.

Award – Different weights are set according to the periods of winning the prizes. The weight is 100% for winners after 2011, 90% for winners in 2001–2010, 80% for winners in 1991–2000, 70% for winners in 1981–1990, and so on.

HiCi – The number of Highly Cited Researchers selected by Thomson Reuters.
N&S – for the 2014 rating, the system considers the number of papers published in Nature and Science between 2009 and 2013.
PUB – The total number of papers indexed in the Science Citation Index-Expanded and Social Science Citation Index in 2013. Only publications of the 'Article' type are considered; 'Review' and 'Letter' types are not.
PCP – The weighted scores of the above five indicators divided by the number of full-time equivalent academic staff.
Alumni represents the quality of education; Award and HiCi, the quality of faculty; N&S and PUB, research output; and PCP, the per capita performance of the university.
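
To make the arithmetic above concrete, here is a minimal Python sketch, with hypothetical universities and laureate data, of the two mechanisms just described: the decade-based decay of the Alumni weight and the leader-relative normalization to a score of 100. ARWU's actual data processing is, of course, more involved.

    # Decade weights for the Alumni indicator: 100% for degrees obtained
    # in 2001-2010, dropping by 10 percentage points per earlier decade.
    def decade_weight(decade_start: int) -> float:
        steps_back = (2001 - decade_start) // 10  # 2001 -> 0, 1991 -> 1, ...
        return max(0.0, 1.0 - 0.1 * steps_back)

    def alumni_raw_score(laureate_degree_decades: list[int]) -> float:
        """Sum of decade weights over an institution's laureate alumni."""
        return sum(decade_weight(d) for d in laureate_degree_decades)

    def normalize_to_leader(raw: dict[str, float]) -> dict[str, float]:
        """The best performer gets 100; the rest a percentage of the leader."""
        best = max(raw.values())
        return {univ: 100.0 * score / best for univ, score in raw.items()}

    # Hypothetical raw Alumni data for three universities.
    raw = {
        "Univ A": alumni_raw_score([2001, 1991, 1981]),  # 1.0 + 0.9 + 0.8 = 2.7
        "Univ B": alumni_raw_score([2001, 2001]),        # 1.0 + 1.0 = 2.0
        "Univ C": alumni_raw_score([1951]),              # 0.5
    }
    print(normalize_to_leader(raw))  # {'Univ A': 100.0, 'Univ B': 74.07..., 'Univ C': 18.51...}

    # The final ARWU score is then a weighted sum over the six indicators,
    # each normalized to its leader in the same way.
    ARWU_WEIGHTS = {"Alumni": 0.10, "Award": 0.20, "HiCi": 0.20,
                    "N&S": 0.20, "PUB": 0.20, "PCP": 0.10}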
ARWU is a very stable ranking: Harvard University has been its permanent leader, and American and British universities dominate it. The best university in continental Europe, ETH Zurich, ranks only 19th in the 2014 ARWU, and the best Asian university, the University of Tokyo, ranks 21st. A total of 1,200 universities have been evaluated.

In conclusion, it should be noted that many publications [7] on the dynamics of universities in ARWU contain mistakes and discrepancies. For example, it is claimed that there were 16 Australian universities in the 2003 ranking, which is not true: in reality there were only 13. This mistake is most likely the result of manual data processing, because automated processing became possible only after 2005. In 2003–2004 universities had to be picked manually from a list of 500 institutions identified only by their country flag, and in 2003, alongside the 13 Australian universities, 3 universities from New Zealand, whose flag is very similar to Australia's, were mistakenly counted as Australian.
Overall, with its stable methodology and reliance on objective indicators, ARWU is the best ranking for tracking the dynamics of universities and drawing conclusions based on their tangible results.

THE Rating
Speaking of the THE ranking, it should be noted that from 2004 until 2009 Times Higher Education published a joint ranking with the consulting company Quacquarelli Symonds (the THE-QS ranking); only in 2010 did the weekly and QS end their cooperation. Since 2010 the magazine has relied on Thomson Reuters for the collection, processing and analysis of data, and the methodology has been significantly reworked. The new ranking was first released in autumn 2010. Only those universities that were able to confirm the data available to the compilers appeared in it; the compilers later apologized to other universities for not including them (the University of Oslo) or for placing them lower than they deserved (two Australian universities). Moreover, the following year the calculation methodology was changed again: for example, the weight of the "Citations" indicator was lowered from 32.5% to 30%.

THE employs 13 carefully calibrated performance indicators, grouped into five areas, each with a certain "weight", i.e. a percentage of the final score [8]:

Teaching: the learning environment (worth 30 per cent of the overall ranking score);
Research: volume, income and reputation (worth 30 per cent);
Citations: research influence (worth 30 per cent);
Industry income: innovation (worth 2.5 per cent);
International outlook: staff, students and research (worth 7.5 per cent).

“Teaching: the learning environment” includes the following indicators:

the perceived prestige of institutions in teaching, drawn from the annual academic reputation survey (15%). The survey gathered just over 10,000 responses, statistically representative of global higher education's geographical and subject mix;
the staff-to-student ratio, used as a simple (and admittedly crude) proxy for teaching quality (4.5%);
the doctorate-to-bachelor's ratio (2.25%);
the number of doctorates awarded, scaled against academic staff numbers (6%);
institutional income, scaled against staff numbers (2.25%).

“Research: volume, income, reputation” is made up of 3 indicators:

The university's reputation for research excellence among its peers, based on the annual academic reputation survey (18%; more than 10,000 responses in 2014);
The university’s research income (6%), scaled against staff numbers and normalized [9] for purchasing-power parity;
Research productivity (6%) – research output scaled against staff numbers.

“Citations: research influence” has only one indicator: citations (30%). It is the single most influential of the 13 indicators and looks at the role of universities in spreading new knowledge and ideas. The data are fully normalized to reflect variations in citation volume between different subject areas; any institution that publishes fewer than 200 papers a year is excluded.

“Industry income: innovation” also has only one indicator: income from innovation (2.5%), a university's ability to help industry with innovations, inventions and consultancy (knowledge transfer), scaled against staff numbers.

“International outlook: staff, students and research” is made up of three indicators:
the ratio of international to domestic students (2.5% of the overall score);
the ratio of international to domestic staff (2.5%);
the proportion of a university's total research journal publications with at least one international co-author (2.5%).
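
Because the 13 indicator weights sum to 100 per cent, a university's overall THE score is a weighted sum of its normalized indicator scores. The following Python sketch, with shorthand indicator names and hypothetical input values, illustrates the aggregation; THE's own normalization of the underlying data is not reproduced here.

    # Weights of the 13 THE indicators, grouped by the five areas above.
    THE_WEIGHTS = {
        # Teaching: the learning environment (30%)
        "teaching_reputation": 0.15,
        "staff_to_student_ratio": 0.045,
        "doctorate_to_bachelor_ratio": 0.0225,
        "doctorates_per_staff": 0.06,
        "institutional_income": 0.0225,
        # Research: volume, income and reputation (30%)
        "research_reputation": 0.18,
        "research_income": 0.06,
        "research_productivity": 0.06,
        # Citations: research influence (30%)
        "citations": 0.30,
        # Industry income: innovation (2.5%)
        "industry_income": 0.025,
        # International outlook: staff, students and research (7.5%)
        "international_students": 0.025,
        "international_staff": 0.025,
        "international_coauthorship": 0.025,
    }
    assert abs(sum(THE_WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%

    def overall_score(indicator_scores: dict[str, float]) -> float:
        """Weighted sum of indicator scores already normalized to 0-100."""
        return sum(w * indicator_scores[name] for name, w in THE_WEIGHTS.items())

    # A hypothetical university scoring 80 everywhere except citations (95):
    scores = {name: 80.0 for name in THE_WEIGHTS}
    scores["citations"] = 95.0
    print(round(overall_score(scores), 1))  # 84.5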

The THE ranking was the first to take into account such an important indicator as a university's income from research.

The ranking is dominated by universities from English-speaking countries, primarily the USA and the UK. It has been quite stable since 2011: as a rule, no university moves by more than 30 positions in a year. The transition from the top 200 of 2010 to the top 400 of 2011 was an exception: for example, no Israeli university made it into the 2010 ranking, while the 2011 edition included 4 Israeli universities, 2 of them in the top 200. In addition, several universities significantly improved their positions in the 2014 ranking compared to the previous year. The most notable changes were the rise of Germany's Ludwig-Maximilians-Universität München from 55th to 29th place; Eberhard Karls Universität Tübingen from the 201–225 segment to 113th; Technische Universität Dresden from the 251–275 segment to 135th; and Korea's Sungkyunkwan University (SKKU) from the 201–225 segment to 148th.

Despite the fact that certain indicators are based on subjective data (various surveys and questionnaires), the THE ranking is an interesting system that attempts to demonstrate the quality of a university's activity as fully as possible.

The only drawback of the ranking is its short history: in its current form it has existed only since 2010, which is not yet enough to form a clear picture of the changes taking place in universities.

QS Rating
The QS ranking was first compiled in 2004, covering 200 universities. Since then the number of places has constantly increased: in 2014 the ranking included 863 universities, while more than 3,000 were analyzed. The methodology has also undergone certain changes. Initially it used five simple indicators, and citations were calculated over a 10-year period; later QS moved to a 5-year citation window, and since 2008 it has used the Scopus database. Today the ranking is based on six indicators [10]:

Academic reputation (40% of the total score);
Employer reputation (10%);
Faculty/student ratio (20%);
Citations per faculty (20%);
International student ratio (5%);
International staff ratio (5%).

Academic reputation is measured using a global survey, in which academics are asked to identify the institutions where they believe the best work is currently taking place within their field of expertise. For the 2014/15 edition, the ranking draws on almost 63,700 responses from academics worldwide, collated over three years. The advantage of this indicator is that it gives a more equal weighting to different discipline areas than research citation counts do. Whereas citation rates are far higher in subjects like biomedical sciences than in English literature, for example, the academic reputation survey weighs responses from academics in different fields equally.

The employer reputation indicator is also based on a global survey, taking in almost 28,800 responses for the 2014/15 edition. The survey asks employers to identify the universities they perceive as producing the best graduates. This indicator is unique among international university rankings.

The purpose of the employer survey is to give students a better sense of how universities are viewed in the job market. A higher weight is given to votes for universities that come from outside of their own country, so it’s especially useful in helping prospective students to identify universities with a reputation that extends beyond their national borders.

The faculty/student ratio is a simple measure of the number of academic staff employed relative to the number of students enrolled. In the absence of an international standard for measuring teaching quality, it provides an insight into which universities are best equipped to provide small class sizes and a good level of individual supervision. Research and teaching staff are not separated, while laboratory researchers, instructors of master's and doctoral students, and professors invited from other universities are not counted. Students include everyone studying in a bachelor's, master's or equivalent program, plus postgraduate students. The calculation uses full-time equivalents (FTE): the FTE student count equals the number of full-time students plus one-third of the number of part-time students. A student pursuing two degrees, one full-time and one part-time, is counted both as a full-time and as a part-time student. If such differentiated data cannot be obtained, the total number of students is used instead. A similar formula is used to calculate the number of faculty, including research staff: FTE equals the number of full-time staff plus one-third of part-time staff [11].
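
As an illustration, here is a minimal Python sketch of the FTE convention just described, applied to a hypothetical university; how QS transforms the resulting ratio into a 0–100 indicator score is not published.

    def fte(full_time: int, part_time: int) -> float:
        """Full-time equivalent: full-time headcount plus one-third of part-time."""
        return full_time + part_time / 3.0

    # Hypothetical headcounts.
    fte_students = fte(full_time=18_000, part_time=3_000)  # 19,000 FTE students
    fte_faculty = fte(full_time=1_200, part_time=300)      # 1,300 FTE faculty

    print(f"Students per FTE faculty member: {fte_students / fte_faculty:.1f}")  # 14.6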

The citation per faculty indicator aims to assess universities’ research output. A ‘citation’ means a piece of research being cited (referred to) within another piece of research. Generally, the more often a piece of research is cited by others, the more influential it is. So the more highly cited research papers a university publishes, the stronger its research output is considered.
QS collects this information using Scopus, the world’s largest database of research abstracts and citations. The latest five complete years of data are used, and the total citation count is assessed in relation to the number of academic faculty members at the university, so that larger institutions don’t have an unfair advantage.
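
The same hypothetical figures can illustrate the citations-per-faculty arithmetic: the citation total over the latest five complete years is divided by the FTE faculty count. Again, QS's subsequent scaling of this value into an indicator score is not public.

    # Hypothetical Scopus citation counts for the latest five complete years.
    citations_by_year = {2009: 21_000, 2010: 23_500, 2011: 25_000,
                         2012: 27_800, 2013: 30_100}
    fte_faculty = 1_300.0  # as in the sketch above

    citations_per_faculty = sum(citations_by_year.values()) / fte_faculty
    print(f"Citations per FTE faculty member: {citations_per_faculty:.1f}")  # 98.0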

The last two indicators aim to assess how successful a university has been in attracting students and faculty members from other nations. This is based on the proportion of international students and faculty members in relation to overall numbers. Each of these contributes 5% to the overall ranking results. Students enrolled in short-term academic exchange programs are not considered in the calculation.

Traditionally, British and American universities occupy the top of the QS ranking. ETH Zurich, the best university in continental Europe, ranked 12th in 2014, while the National University of Singapore, the best Asian university, held 22nd place.

The QS ranking is the least objective: 50% of the total score depends on the opinions of various experts, which makes it rather unreliable. Moreover, because of the constantly expanding number of positions, it is hard to determine whether a university's position changed because of its own condition or because other universities disappeared from the ranking. QS is also the most volatile ranking. For example, Hong Kong Baptist University appeared in 2009 in 307th place; a year later it was 342nd; in 2011 it suddenly rose to 234th and then slid back, to 271st in 2012 and 288th in 2013. At the same time, it is easier to improve one's position in the QS ranking over a fairly short period than in the other rankings. Examples include Denmark's Aalborg University and Korea's Sungkyunkwan University. Aalborg University appeared in the ranking in 2009 in the 501–600 segment; a year later it was in the 451–500 segment, in 2011 it reached 362nd place, in 2012 it was 352nd, in 2013 334th, and in 2014 it went down to 363rd. Sungkyunkwan University was only 520th in 2007, 370th in 2008, 357th in 2009 and 343rd in 2010; a further sharp rise followed, to 259th in 2011, 179th in 2012 and 162nd in 2013, and in 2014 the university occupied 140th place. Such a rapid rise is impossible in any other ranking system.

Summing up, it must be said that the QS compilers can be somewhat careless. First, there is no data for 2010 on the official website (possibly because 2010 was the first year QS operated on its own after its partnership with Times Higher Education ended). Second, after the website was redesigned in 2013 it became impossible to obtain information about the 2004, 2005 and 2006 rankings (before the redesign the positions of a university in 2005 and 2006 could still be checked). Third, at the end of 2012 the university data for 2005 was identical to the data for 2006, which did not always reflect the true situation. All of this makes it necessary to search for and verify QS data for 2004, 2005, 2006 and 2010 in other sources, which is not always possible. Finally, in the profiles of various universities, positions in the 2008–2009 rankings between 400th and 600th place sometimes appear as 401–450 or 501–550, and sometimes as 401–500 and 501+. Since the 451–500 and 551–600 segments are absent, tables showing the dynamics of universities in the QS ranking use the 401–500 and 501+ options.

Despite its long history, the QS ranking does not paint a clear picture of the changes happening in universities. This is due to the high subjectivity of its indicators, the inclusion of more and more new universities each year, and the expansion of the ranking itself. Nevertheless, it is in this ranking that a rapid rise is most likely.

Excellence Initiatives
Strictly speaking, there is no single term "excellence initiative" shared by all countries; it is rather a collective name for a whole series of projects and programs aimed at improving the quality of universities and their international competitiveness. Following this approach, the Table lists the countries implementing such "excellence initiatives". They were selected from various regions of the world to show different approaches in the specific conditions of particular countries. It was also important to select countries whose universities are represented in the world university rankings; Nigeria, for example, is not on the list because none of its universities appear in any of them.

[1] A type of economy where knowledge plays a major role and generation of knowledge is a source of growth
[2] Altbach, Ph. G. The Costs and Benefits of World-Class Universities // Academe, Vol. 90, No. 1 (January–February 2004)
[3] http://www.shanghairanking.com
[4] http://www.topuniversities.com
[5] http://www.timeshighereducation.co.uk/world-university-rankings/#
[6] http://www.shanghairanking.com/ARWU-Methodology-2014.html
[7] See, for example, Salmi, D., Frumin, I. How governments achieve international competitiveness of universities: lessons for Russia // Matters of Higher Education, 2013, No. 1, p. 41
[8] http://www.timeshighereducation.co.uk/world-university-rankings/2014-15/world-ranking/methodology
[9] The normalization procedure is not publicly disclosed, so it is impossible to forecast a university's position. Notably, a recent analysis of rankings conducted by NIFU, a Scandinavian research institute for innovation, research and education, which reviewed the THE ranking among others, concluded that rankings are subjective, based on doubtful data and useless as a source of information for planning improvement activities; the data underlying the academic surveys in THE (33% of the total score) are unavailable, etc. See Myklebust, J. P. Official study slams university rankings as 'useless' // University World News, No. 335 (http://www.universityworldnews.com/article.php?story=20140918170926438)
[10] http://www.topuniversities.com/university-rankings-articles/world-university-rankings/qs-world-unive...
[11] Arefiev, A. International ratings of higher education institutions: history and modern times // University rating measurement: international and Russian experience / Ed. by F. Sheregi and A. Arefiev. Moscow: Center for Sociological Research, 2014, pp. 36–37
[12] Thai universities have never appeared in the ARWU while Malaysian universities have never been ranked by THE