Higher education in India (2)
November 14, 2018

 

November 13th, 2018 – Alex Usher

Yesterday, I discussed the need to change the culture in Indian universities to make them a bit more focused on output and less focused on the employment privileges of their faculty. There is one trick the Modi government has used in this respect: the National Institutional Ranking Framework (NIRF). That’s right – in India, the government ranks its institutions. And not for funding purposes – just to rank them and give them a kick in the tail to pay attention to performance.
Some background is required here. For a very long time, no one really cared what institutions did. In fact, for decades institutions did not even bother reporting basic information like “enrolment” to the national regulator, the University Grants Commission (UGC). For quite some time, if you wanted a sense of the number of students in the country, or the balance between public and private university students, you had to wait every seven years for the National Sample Survey – which sits somewhere between Canada’s General Social Survey and the American Community Survey – to do a module on education and extrapolate from there (the data has improved since the Human Resources Ministry took over data collection from the UGC in 2010).
Around the turn of the decade, as it began to dawn on the country’s leaders that they were getting left in the dust on higher education, a new body (the National Assessment and Accreditation Council) came along to accredit universities. This accreditation did not take the form of the yes/no, pass/fail accreditation that we tend to see in the US or Europe. Instead, it took the form of graded accreditation, as one sees in places like Chile. In India, you could pass with one of seven grades between a C and an A++ (a D results in loss of accreditation). This is a “rating” rather than a “ranking” per se, but to a lot of people in India it looked pretty similar.
Then, the Modi government came along and claimed this was not good enough. First, accreditation was only renewed every five years, while rankings could, in theory at least, be done annually. And second, it was felt that grading 900-odd universities into just seven categories didn’t push institutions hard enough to compete with one another.
And so the NIRF was born. As ranking systems go, it’s pretty mundane: “Teaching and Learning Resources” counts for 30% of the total, as does “Research and Professional Practice”; “Graduation Outcomes” makes up 20%, and “Outreach and Inclusivity” and “Perception” make up 10% each (the NIRF methodology document is here). Or, at least, it would be mundane in most western systems of higher education, where institutions have some control over things like hiring, curriculum and tuition fees. As a way of measuring institutional performance in a largely centralized system like India’s, it’s borderline insane.
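To make the arithmetic concrete (this is my own back-of-the-envelope rendering of the weights above, not necessarily NIRF’s official presentation), the overall score works out to a simple weighted composite of the five parameter scores, each presumably scored out of 100:

Overall score = 0.30 × (Teaching and Learning Resources) + 0.30 × (Research and Professional Practice) + 0.20 × (Graduation Outcomes) + 0.10 × (Outreach and Inclusivity) + 0.10 × (Perception)

So an institution scoring, say, 70, 50, 80, 60 and 40 on the five parameters would land at 0.30×70 + 0.30×50 + 0.20×80 + 0.10×60 + 0.10×40 = 62 out of 100.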
For instance, what on earth is the point of measuring expenditure per student, or faculty per student, when the UGC controls institutional budgets through its own entirely opaque funding formulas? By definition, the institutions that will get rewarded here are those that can play the UGC game well and those that can to some degree limit enrolment, neither of which is self-evidently beneficial to the nation. Outreach and Inclusivity is an odd metric when 49% of all seats are by law reserved for lower-caste students. The graduation outcomes metrics are based on self-reported data which does not appear to be published anywhere on the NIRF site but which one can only imagine is a dog’s breakfast in terms of data quality. And the “perception” metric is a straight-up prestige index based on a government-funded survey of employers, academics and the general public. This sort of makes sense in a private ranking (which was clearly the inspiration for these measures), but the main response this index will inspire is greater expenditure on advertising and marketing, which is perhaps not behaviour one wishes to encourage.
It’s not that the inspiration behind the NIRF is a bad one. Lord knows, the Indian university system is badly in need of more performance-orientation. But I think there are two issues here. The narrow one is that a lot of the indicators in use either rely on dodgy data or don’t reflect institutional performance very much. I think that’s mainly because there was a desire to emulate some western ranking systems without necessarily considering the contextual differences. One thing we’ve seen in the data so far is that there actually aren’t huge differences between institutions on most of these measures, which leads to some ludicrously large swings in the rankings from year to year. That’s not the end of the world: in theory, the definition and selection of indicators could get better over time, in which case the problems to date can simply be seen as teething troubles.
The larger issue, I think, is whether rankings can actually act as an agent of change. Conceivably, they can, if institutional leadership a) sees actual rewards from better rankings and b) has policy levers it can use to incentivize staff in various ways. But since, for the moment, neither of these propositions is true, it’s hard to see what theory of change these rankings are supposed to be working through.
A much more promising angle is to start granting institutions more operational autonomy. Some tentative steps are being taken in that direction, but it will take a few years to see how that shakes out. Stay tuned.
