Lost in the metrics

Metrics. Whether it’s the rankings of universities and colleges, or the bibliometric assessment of research impact, metrics of various types are being wielded by private companies, governments and administrations to measure the performance of institutions, departments and even individual faculty. Controversy over the use — and some say ‘abuse’ — of metrics is now prompting academic staff associations and their members to push back.

“It’s a huge mistake to adapt your institution to climb in those university league tables. It’s a mistake to join them instead of beating them because, frankly, only 15 per cent of students are influenced by that,” says Yves Gingras, professor and Canada Research Chair in History and Sociology of Science at Université du Québec à Montréal and author of Bibliometrics and Research Evaluation: Uses and Abuses. “When universities give in to those rankings, they lose their bearings; they turn their back on their mission and let other people impose values that are not academic values.”

Gingras says universities should recognize rankings as a smokescreen and pull out. “When the University of Toronto decided not to participate in Maclean’s annual survey because it was taking up too much time and resources, it made a big difference in my province. Since then, it’s not that important for our universities. And the sooner we treat the Shanghai ranking the same way, the better.”

But he says he understands why some universities are obsessed with metrics. He points out that many international research projects are joint ventures, and that European and Asian universities often refuse to collaborate with researchers employed by a university that is not well placed in international rankings. “But let’s be honest, our main source of funding in Canada comes from the Tri-Councils and they don’t want to know if your department is in the top 100 in the world on that topic, they want your resume and the paperwork explaining your project.”

For administrators, the temptation is also strong to use metrics to evaluate individual performance. Last year, the use of metrics was one of the big issues that pushed members of the University of Manitoba Faculty Association to go on strike.

“In the end, we won-ish. Right now, the degree of legal protection in the collective agreement on the use of metrics is not that strong; we were able to include that the use of metrics cannot constitute the sole criteria for promotion or performance evaluation. But there is a process on the way that will determine if our win was strong or weak,” explains UMFA past president Mark Hudson.

Hudson says a joint committee, with three members nominated by the association and three nominated by the administration, will examine issues related to the collection and use of research metrics in evaluative processes. That committee is to produce a report of its findings and recommendations, and if the majority rules against the use of metrics, new language will be added to the collective agreement.

“The committee has to determine if there are substantive risks or not in using research-based metrics. But we already know — because there is research to prove it — that there are risks for gender-based and racial-based discrimination,” says Hudson. “And even if there is a single individual who ends up being discriminated against because of metrics, then the university should not be allowed to use metrics for the evaluation of their performance or for promotion.”

Academic staff in other jurisdictions have also clashed over metrics. In New Zealand, the Performance-Based Research Fund was established in 2003 and is used by the government to allocate research funding to post-secondary institutions. Metrics are used to assess individual researchers and to rank disciplines and universities.

Since the system was implemented, conditions for academic staff have deteriorated, according to the Tertiary Education Union. “If we don’t set the boundaries, the government will continue to measure us against arbitrary metrics like progressions and completions — a move that degrades both the quality of our tertiary education in New Zealand and our profession,” says TEU president Sandra Grey.

Encouraged by the election of a new government, Grey is hoping metrics will be placed on the back burner. “We want the end of the obsession with narrow performance metrics so teaching, research and learning is understood as more than completions and international journal articles,” she says.

In the United Kingdom, the use of metrics has been debated for years. Since 2014, the Research Excellence Framework has assessed academic research using metrics. The government says the system “provides accountability for public investment in research and produces evidence of the benefits of this investment; provides benchmarking information and establishes reputational yardsticks, for use within the higher education sector and for public information; and informs the selective allocation of funding for research.”

According to a Times Higher Education study published in 2015, individual metrics-based targets of one form or another have been implemented at about one in six UK universities. “Metrics — numbers — give at least the impression of objectivity, and they have become increasingly important in the management and assessment of research ever since citation databases such as the Science Citation Index, Scopus and Google Scholar became available online in the early 2000s,” writes journalist Paul Jump.

David Robinson, executive director of CAUT, says the metrics-driven trend is worrisome. “Academics should not be assessed, managed, or controlled simply using quantitative metrics.”

He also notes that “performance metrics can especially disadvantage Indigenous scholars, members of equity groups, those in non-traditional career paths, as well as those who conduct unconventional research or use non-traditional research methods.”

Another metrics-related concern is that measuring research output neglects the diversity and totality of scholarly activity. For example, metrics are more easily deployed in the natural and biomedical sciences, where research is published in scholarly journals. In the humanities and social sciences, researchers lean toward publishing books, but books score lower — in terms of metrics — than journal articles.

Hudson agrees. “The issue is much larger than unions defending individual members. This kind of impact indicator advantages a certain kind of research over others; it privileges novel findings, because you have more chances of getting published. It’s counter to the skeptical approach of science that is supposed to be present and to flourish in our universities.”

Robinson argues that peer review should remain the primary mechanism for assessment, and “metrics should not interfere with decisions on hiring, tenure and promotion, compensation, working conditions, or discipline.”

“It’s very dangerous to rely on metrics to evaluate the work of an individual,” adds Gingras. “If your research is very focused, your index is bound to be low. If you study knot theory, for example, there might only be 50 specialists like you in the world. The peer review process is still the way to go. Your peers don’t need your h-index to know the value of each scholarly publication; they know because they are in the field.

“If you want to use metrics, you have to use them as a ‘big picture’ tool and use a comparable sample. It’s a mistake to measure everything and then add it all up to give a final score to a department or a university. It’s like adding apples with oranges and tomatoes. It makes no sense. But if you want to compare your astrophysics department with other astrophysics departments to see how active your research is, it’s OK to measure the number of publications of your academic staff in existing databases that are relevant to astrophysics. Then, you compare apples with apples.”
