From Numbers to Merit: Escaping the Quantification Quagmire in Higher Education

Some time ago, the HEC introduced quantification of research output, making a certain number of publications a requirement for hiring and promotion. At the time, it seemed like a good idea. Why? Because without measurable standards, hiring and promotion decisions were being made subjectively. The feeling was that there was massive corruption in the process – people hired unqualified friends, relatives, sycophants, or party members and passed over more highly qualified candidates. Quantification of research output, together with objective criteria for hiring and promotion, was supposed to prevent this from happening.

First, I would like to note that things were not as bad as they were thought to be. It was widely believed that it was impossible to be hired or promoted without connections in the administration or among senior government officials. However, I served on many hiring and promotion committees, both within my own university and as an external invitee at others. I saw many candidates hired and promoted purely on merit, without any connections. In fact, throughout my twenty years in senior positions in Pakistani academia, I never saw a single case of atrocious violation of merit on my watch. I did hear of such cases occurring elsewhere. But if people of integrity are present on the selection board, they can prevent this from happening. So the issue is really one of creating integrity, not of quantification. But that is all auld lang syne. The new system, based on quantification, has produced a monster which could not have been imagined at the time it was introduced.

The focus of the faculty shifted from quality to quantity. Instead of trying to publish in high-quality journals, which requires more time and effort, faculty turned to the lowest-ranked journals in the categories defined by the HEC. Many unhealthy practices came into vogue. Papers acquired multiple authors, because a publication is counted separately for each one. Recognizing the pressure on the faculty, many fake journals sprang up which would publish for money. Many universities launched journals of their own, so as to allow their faculty to self-publish. I have personally examined CVs where the professor in question had zero publications until a certain year and then over 100 publications within the next two years, and there are huge numbers of such cases. Many CVs are populated entirely with fake publications and joint papers, and even authors with real publications have fake papers on their CVs. Many faculty cannot distinguish fraudulent from legitimate journals, even when they want to. It is horrifying to contemplate the enormous amount of time, effort, and money spent on this meaningless pursuit of publication counts, using papers which add nothing at all to our knowledge. But far more disastrous than the waste of time, this race has propelled incompetent people to the top and led to a complete loss of the ability to differentiate between good and bad research.

Before going on to suggest solutions, it is worth pausing briefly to point out the deeper source of the problem: the positivist philosophy which emerged in the early twentieth century. According to positivism, we can never have real knowledge of unobservable and qualitative phenomena unless we reduce them to observable and measurable manifestations. This is what led to attempts to measure things which are inherently unmeasurable, like intelligence, love, faith, and corruption. In business schools, this led to a popular motto: “You cannot manage what you cannot measure.” However, there are many examples of the disasters which result from trying to measure what is not measurable. As an illustrative example, when police performance was measured by the number of challans (traffic citations) issued, the police started handing out challans for everything, indiscriminately. This is an illustration of Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”

So, what should be done? I think it is important to start by recognizing that this system of quantifying research by counting papers has been an unmitigated disaster, and it must be abolished as soon as possible. Traditional methods suffer from biases – for or against – created by personal relationships, but there are a number of known methods for minimizing such biases. My favorite is the method used by the Catholic Church for the canonization of saints. One person makes as strong a case as possible for promotion, while a Devil’s Advocate makes as strong a case as possible for denial. After listening to both sides, the decision is made by a panel of experts, chosen by methods which ensure neutrality, or at least a balanced representation of different perspectives. Far more important, however, is that we emphasize teaching skills far above research output in evaluating the merits of faculty. To assess teaching skills, student evaluations are not enough. Rather, we can judge the competence of teachers by assessing whether or not their students have mastered the skills they were supposed to learn.

To conclude, I believe it is time to call an emergency meeting of the top educators in the country to devise an alternative to the current system of faculty evaluation, which is producing functional illiterates with Ph.D.s. We have many well-qualified, sincere, and thoughtful senior academics available, and I am sure that they could build consensus around an alternative system which would lead to far better outcomes.


Date posted: December 19, 2023
