Photo: mohamed_hassan (cc, via Pixabay)

‘Oh No We Are Number One: On Being Ranked 1st In The World’

Mark Deuze,
13 March 2018 - 11:48

Mark Deuze works for the best department of media studies in the world, according to Quacquarelli Symonds. That’s no reason to celebrate. ‘Rankings are the death of everything that is good and beautiful in the creative industry that academia is.’

“Communication Science & Media Studies ranked 1st in the world” headlined the press release of the University of Amsterdam on 28 February 2018, immediately followed by the curious qualifier that we are now “best in the world” in the latest annual ‘World University Rankings’ by the Quacquarelli Symonds (QS) company. As I am faculty in Media Studies and hold a PhD in Communication Science from the UvA, my first and foremost thought was: “Oh no, we are number one.”

 

‘Rankings are the death of everything that is good and beautiful in the creative industry that academia is’

 

In this essay, the problems of university rankings are addressed on three levels. First, the proliferation of metrics, rankings, and other performance audits dominating the way institutions of higher learning work today results in an accountability paradox: the more ranking systems are used to hold universities accountable to the public and the state, the less accountability actually occurs. Second, we have to consider problematic issues regarding the history, organization and methodology of rankings such as the one provided by QS. Of particular concern here is the near-certainty that 99% of universities around the world will never make it into the list of ‘top’ ranked institutions. Third, one has to consider the consequences of being ranked — and especially of ending up high in the rankings. Specifically, we should be concerned about the institutional inclination toward reactivity — the idea that people change their behavior in reaction to being evaluated, observed, or measured — and the effect this has on academic freedom.

‘Instead of faculty and students determining the mission and course of the university, control subtly shifts to an ever-expanding body of bureaucrats’

The Accountability Paradox

Since the 1990s in particular, universities have become subject to new public management — an effort to make the administration of higher education more businesslike, aiming to improve its efficiency by using approaches from the private sector. Key instances of this trend are seeing students as customers, treating degrees as products, and introducing an ever-growing range of performance evaluations, audits, and achievement metrics in order to monitor the work that faculty do. Although most would agree that some kind of assessment and performance monitoring is useful, overall this system of management has facilitated an ongoing deprofessionalisation of those who do the teaching and research at the university.

 

Instead of faculty and students determining the mission and course of the university, control subtly shifts to an ever-expanding body of middle and senior management, support staff, database and IT specialists, and other bureaucrats. Although these people generally work hard and with the best of intentions, they are forced to prop up the bureaucratic system rather than serve the core activities of the university: teaching and research. As graduate director, I have witnessed up close the frustration this impossible position causes for everyone involved.

‘Everyone can find a ranking somewhere that is somehow meaningful to whatever public relations purpose is benchmarked as important at that particular institution’

Rankings of universities are a good example of an easily quantifiable and quotable metric that makes instant sense to anyone who does not teach or conduct research (at a university) for a living. Rankings are part of a whole series of performance indicators common in today’s university — to name but a few: student enrollment, degrees awarded, number of publications, listings of top scholarly journals and publishers, average scores on student evaluations. The contemporary university can be expressed in so many numbers, it would seem we can now quickly and easily assess how it is doing. However, as Bruno Frey and Margit Osterloh write in their research on rankings:

 

‘[I]nstead of improving performance through accountability, too much energy and time is being consumed in reporting, negotiating, reframing, and presenting performance indicators, all of which distracts from the performance that is desired.’


As prospective students, politicians, public officials, administrators, journalists, and others increasingly rely on such rather useless numbers, an accountability paradox arises. Instead of making a Department (or, in our case, two excellent Departments with vastly different profiles, activities, and cultures) more accountable to the outside world, such a seemingly clear-cut indicator makes us less accountable, because it obscures all the things we do, all the experiences students and faculty have, all the different ways in which learning and researching take place.

An important point in all of this is that the concerns about new public management are not exclusive to the university: all public sectors — such as primary and secondary education, healthcare, the military and the police — are impacted by this shift toward oversight, performance control and continuous assessment. Although filling out forms and reporting to bureaucratic entities are tedious and time-consuming, the real victims of all of this are the people we serve: students, patients, citizens.

 

We Are The 1%

Rankings are big business. In the last decade or so, numerous public institutions and private businesses have introduced their own ranking systems for universities around the world, as well as all kinds of nation-based rankings. The QS survey is but one among many rankings, and the list today includes the Academic Excellence Project (Russia), Academic Ranking of World Universities (Shanghai), BestCollege Hunt (India), Center for World University Rankings (Saudi Arabia), the Complete University Guide, CWTS Leiden Ranking (Netherlands), the Global Institutional Profiles Project, the IREG Observatory on Academic Ranking and Excellence, Meta University Ranking, Professional Ranking of World Universities (France), Webometrics’ Ranking Web of World Universities, and the Times Higher Education World University Rankings.

Photo: Klaartje Berkelmans

In other words: everyone can find a ranking somewhere that is somehow meaningful to whatever public relations purpose is benchmarked as important at that particular institution. Still, most of these lists rank more or less the same 100 to 200 institutions at the top, year after year. That means that roughly 99 percent of all universities around the world — according to the 2018 World Higher Education Database, maintained by the International Association of Universities, there exist well over 18,500 universities worldwide — are never to be found in any of these international rankings.
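For readers who want to check that figure, the arithmetic is straightforward. Here is a minimal back-of-the-envelope sketch in Python, using only the numbers quoted above (the 18,500-plus universities counted by the IAU database, and a recycled top list of 100 to 200 institutions):

```python
# Back-of-the-envelope check of the 'roughly 99 percent' claim above.
# Figures from the text: the 2018 World Higher Education Database counts
# well over 18,500 universities; most rankings recycle more or less the
# same 100 to 200 institutions at the top.
TOTAL_UNIVERSITIES = 18_500  # lower bound, per the IAU database

for list_size in (100, 200):
    ranked = list_size / TOTAL_UNIVERSITIES
    print(f"Top {list_size}: {ranked:.1%} of universities ranked, "
          f"{1 - ranked:.1%} never appear")

# Output:
# Top 100: 0.5% of universities ranked, 99.5% never appear
# Top 200: 1.1% of universities ranked, 98.9% never appear
```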

 
Being ranked at the top (and being in the top 1% of all universities in the world is arguably the ‘top’) has a Matthew effect of accumulated advantage: once an institution is ranked there, it is more likely to be ranked again the next year. This in effect makes being ranked as ‘top’ profoundly sad, as it means being part of a deliberate exclusion mechanism that makes invisible the amazingly diverse work of educators and researchers (and students) from thousands of universities around the world.

‘For students, increasingly almost everything they do at the university becomes a gradable event, stifling the freedom to just learn, study, research, and explore based on curiosity, serendipity, or inspiration’

Of course, there are profound methodological problems with the QS rankings that have been addressed by others already. Such critiques highlight the problematic basis of the rankings: 40% of the ranking score is based on ‘reputation’ among peers, a highly contested way of assessing any kind of quality or diversity. Of particular interest to the argument here is the fact that QS substantially remodels the formulas, models and approaches underlying its ranking system every year (as do the other ranking systems). This suggests there are rather significant methodological issues to be resolved every year. It also means that a year-to-year comparison of rankings becomes impossible, and therefore any position in such a list is rather meaningless.
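To make that last point concrete, here is a toy sketch of how a composite ranking score works. Only the 40% reputation weight comes from the critique above; every other indicator, weight, and value is an invented assumption for illustration, not QS’s actual methodology:

```python
# Toy illustration, NOT QS's actual methodology: a composite ranking
# score as a weighted sum of indicators. Only the 40% reputation weight
# comes from the text; the other indicators, weights, and values are
# invented for the sake of the example.
def composite_score(indicators: dict, weights: dict) -> float:
    """Weighted sum of indicator scores (all on a 0-100 scale here)."""
    return sum(weights[name] * indicators[name] for name in weights)

university = {"reputation": 72.0, "citations": 85.0, "staff_ratio": 60.0}

weights_2017 = {"reputation": 0.40, "citations": 0.35, "staff_ratio": 0.25}
weights_2018 = {"reputation": 0.40, "citations": 0.25, "staff_ratio": 0.35}

print(f"2017 formula: {composite_score(university, weights_2017):.2f}")  # 73.55
print(f"2018 formula: {composite_score(university, weights_2018):.2f}")  # 71.05
# Same university, same underlying data: the score shifts purely because
# the formula was remodeled, so positions across years are not comparable.
```

The university’s underlying data is identical in both years; its score (and thus potentially its rank) changes only because the formula did, which is exactly why positions cannot be compared across years.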

 

Academic Freedom

Finally, perhaps the most worrying aspect of celebrating rankings and being ranked uncritically is a gradual loss of academic freedom in favor of ‘playing by the numbers’ and performing to the test: organizing the work and performance of a Department, School, or even an entire university so that it will do better according to whatever performance or accountability metric is thrown at it. For students, almost everything they do at the university increasingly becomes a gradable event, stifling the freedom to just learn, study, research, and explore based on curiosity, serendipity, or inspiration.

‘I say, wholeheartedly: fuck rankings’

As faculty, we are expected to ‘publish or perish’, putting out on a non-stop (and annually evaluated) basis articles, chapters, conference presentations and all kinds of other publications that are counted, but generally never read, seen, or listened to — especially not by our students, let alone the general public. If we do not ‘score’ on measures rating and ranking our scholarly output, the number (and size) of our research grants, and subsidies for creative work, our contracts expire, our tenure dissipates, our careers dwindle and therefore our voices are silenced.

 

No one should want to contribute to this, yet here we are: we are number one. What frightens me is what happens if next year we are number two. Or number ten, or fifty? Does that then mean we are lazy? That we are all of a sudden not ‘best’ or ‘top’ anymore? Rankings (and all the other performance metrics that are part of new public management) do not serve to recognize the richly diverse work that faculty do; their master is the bureaucratic system that solely functions to maintain itself. Whether we are first or last is meaningless to our research and teaching — but it becomes powerful when made part of a managerial strategy predominantly interested in streamlining productivity, efficiency and controllability.

 

Conclusion

I do not need, no, I do not *want* some for-profit company to tell me that my colleagues at the University of Amsterdam are brilliant, passionate, hardworking, intrinsically motivated and committed to their students and their research — I know. I know this goes for all the other communication science and media studies colleagues and units in my home country, as it does pretty much everywhere else, I’m sure.

 

What I also do not need is a celebration of such rankings at the expense of academic freedom, of inclusion and diversity, and of real accountability — which can be established through more careful and grounded measures, such as:

  • mentoring services (especially for junior colleagues);
  • peer intervision for faculty;
  • a universal basic research income instead of the current dysfunctional and wasteful grant circus;
  • periodic self-assessment;
  • qualitative instead of quantitative student evaluations.


Academic faculty — educators and researchers — are like any professional in the creative industries: what we need in order to do our very best work is autonomy and recognition. Those two things are exactly what gets whisked away in contemporary higher education, with its fetishisation of performance metrics, audits and assessments.

 

That is why I say, wholeheartedly: fuck rankings.
