By Marc Spooner
Every so often, one or another government branch or agency claims there is an educational crisis in Canadian post-secondary institutions (but never addresses perennial underfunding). This latest exercise in bureaucratic regime-building comes from Ontario and follows the United Kingdom’s lead in implementing the misguided Teaching Excellence Framework, an exercise similar to its Research Excellence Framework, but for measuring and rating university teaching.
Recently, media reported on a pair of studies carried out by the Higher Education Quality Council of Ontario (HEQCO), claiming that roughly one in four students lack adequate labour-market skills such as literacy, numeracy and critical thinking. The findings are based on the results of a few online standardized tests created by the Organisation for Economic Co-operation and Development (OECD) and the Educational Testing Service (ETS).
Granted, these attention-grabbing studies typically find their way into national headlines in shock-and-awe fashion, yet a closer look at the actual reports they are based on often reveals a litany of assumptions, limitations and methodological flaws.
Take for example HEQCO’s recent report Measuring Critical-Thinking Skills of Postsecondary Students; nowhere in this report on critical thinking do the authors actually provide a conceptual definition of critical thinking! The report simply states, “To measure critical thinking, PAWS (Postsecondary and Workplace Skills) uses the HEIghten Critical Thinking assessment — a 45-minute online test developed by the Educational Testing Service to measure the analytic and synthetic skills of college and university students.” Even the initial journal article from ETS examining how the tests were validated clearly states: “One thing evident to the HEIghten design team, after synthesising the abundant information from the existing literature, is that a clear operational definition is needed to provide transparency about the theoretical underpinnings of this assessment.”
Because the report provides no definition of critical thinking, readers are left with the discomforting impression that its authors looked around for the most convenient test of critical thinking and then made it their implied definition — a prime example of the test tail wagging the construct-definition dog.
Such audit procedures are part of an alarming and misguided global trend to deprofessionalise, surveil and control our academies and our scholarly work. State auditors, quality assessment bodies and university central administrations are increasingly demanding data with which to judge individual, department, faculty and institutional-level performance under the guise of accountability. The problem is that, all too often, such judgements are made by people who lack the disciplinary expertise to make meaningfully qualified assessments, using highly deficient and misleading measures.
There is no shortage of studies on the manner in which so-called accountability frameworks and performance indicators pervert the very object they set out to measure through forced quantification and targeting. One need look no further than the mass high-stakes testing craze that has all but strangled sound pedagogy in so many public education districts within the United States and beyond.
To the degree that externally determined and tested criteria are imposed upon courses, programs and the university experience, traditional collegial authority and autonomy are undermined. How a few 45- to 90-minute, one-shot standardized tests developed by the OECD or ETS and promoted by HEQCO and others can be compared to any degree program's existing course and program requirements — each determined and assessed by expert professionals and subject matter specialists — is a perplexing question. A standard four-year undergraduate experience likely includes 20 to 40 expert "second-opinion" evaluations of student achievement and subject matter mastery as students progress through their programs and are taught by a wide variety of professors with a diversity of teaching styles. Why any unit or program would submit, or lend credence, to a one-size-fits-all, quantified, out-of-context test administered via computer at the behest of an external body with little or no disciplinary and program knowledge truly boggles the mind.
These online tests diminish and distort the post-secondary learning experience by reducing it to what can easily be quantified and measured by a computer. Just as distressing is how such technologies of governance are increasingly being proposed and deployed by governments and agencies to strategically and cynically challenge and undermine academic freedom and our traditional scholarly expertise.
We must vigorously expose and denounce such assaults on higher education and reassert our professionalism by reclaiming and defending our traditional disciplinary and collegial authority to judge what is of quality, determine what is of importance and set out the manner in which it should be taught and assessed.
Marc Spooner is a professor of education at the University of Regina and co-editor, with James McNinch, of Dissident Knowledge in Higher Education.
The views expressed are those of the author and not necessarily CAUT.