The MIT Press, 2016; 119 pp.
In Bibliometrics and Research Evaluation, Yves Gingras provides a compact and accessible account of bibliometrics, tracing its early applications through to the contemporary misapplication of metrics for research evaluation at universities. Gingras, a professor and Canada Research Chair in History and Sociology of Science at Université du Québec à Montréal, contends that these tools pose a mounting danger when wielded so poorly as weapons in the hands of administrators and researchers alike.
Bibliometrics was initially used by librarians to separate frequently consulted journals from those rarely cited and thus effectively obsolete, so that the latter could be (re)moved to make space for recent issues; it later developed into a tool for retrieving the burgeoning literature of science. From the outset of this book, Gingras illustrates how things have gone awry in the past few decades as these tools have been misapplied to evaluating the performance of individual researchers and the quality of their research.
At a macro scale, Gingras demonstrates that the study of publications and citation patterns can yield insights into the global dynamics of science over time. However, these ill-defined quantitative indicators, especially when taken to the micro level and applied to individuals, often have perverse and unintended effects on the direction of research.
In many ways, Gingras illustrates how the social sciences, and even the arts and humanities, are following the sciences in adopting the present-day mantra of evaluating scholarly output. These metrics are doubtless driving decisions about where and what to publish, and reshaping, even controlling and constraining, scholarly discourse in unexpected ways. As many scholars conform by publishing in journals with high impact factors, which are typically international in scope, topics of local interest are neglected. Books become a much less important medium of publication, since journal citations count for more and accrue with less lag. (This work was originally published in French as Dérives de l'évaluation de la recherche and translated into English; English has become the prevalent language of scholarship, surpassing and perhaps on the verge of supplanting all others.)
Gingras questions why researchers allow their work to be erroneously represented by the impact factor of the journals they publish in, or by an h-index, which he compares to a broken thermometer: it only goes up, never down, and as he clearly illustrates, it does not measure quality independently of productivity. The drive to appear more objective has fed a metric tide in which any number beats no number, and in which invalid indicators are applied through poorly articulated instruments that run contrary to what was supposed to be measured in the first place.
Gingras also argues that universities seem eager to let invalid indicators rank and diminish their contributions to society. He discusses the compilations of Maclean's and the so-called Shanghai rankings, along with questionable practices such as trading in highly cited researchers to garner more favourable numbers.
Given the trends in research evaluation, it would have seemed advisable for Gingras to publish this book as a series of articles, initially in English, in some major international journal. Despite knowing full well how such scholarship would likely be evaluated, the author demonstrates tremendous academic leadership and quality in publishing this volume instead. It is a very short monograph, but of such critical importance that every academic library should acquire it. Highly accessible and well documented, with 20 pages of endnotes, it should also be required reading for anyone who needs to be better informed about the many abuses and disturbing trends in attempting to evaluate the quality of scholarship at universities … so, everybody!
Michael Hohner is a librarian at the University of Winnipeg.