By James Compton
Excellence. It’s one of today’s favourite academic words. You won’t find a university president who doesn’t claim that their institution pursues and embodies excellence in teaching and research. And why not? Canadian universities have quality scholars teaching in all disciplines. And besides, it’s their job to promote their institution. Sadly, however, the phrase teaching excellence has become tightly wedded to the branding strategies of university administrations caught up in the competitive pursuit of students, now routinely treated as consumers.
Nowhere is this truer than in the use of student opinion surveys, often referred to as student evaluations of teaching (SET). These anonymous surveys are routinely conducted near the end of all university courses and used as a measure — among others — of a professor’s teaching and course delivery. Here’s the problem, though. SETs do not measure teaching effectiveness. This is the conclusion of a growing body of peer-reviewed research challenging the use of these instruments in hiring, promotion and tenure evaluations.
“SETs seem scientifically sound: the objective correlation of numerical data,” wrote William Kaplan in his recent arbitration ruling involving Ryerson University and the Ryerson Faculty Association. “But upon careful examination, serious and inherent limitations in SETs become apparent.… The expert evidence … persuasively demonstrates that the most meaningful aspects of teaching performance and effectiveness cannot be assessed by SETs. Insofar as assessing teaching effectiveness is concerned — especially in the context of tenure and promotion — SETs are imperfect at best and downright biased and unreliable at worst.”
What SETs do measure, according to a recent study by Anne Boring, Kellie Ottoboni and Philip B. Stark, is a student’s gender bias. “SET are more strongly related to instructor’s perceived gender and to students’ grade expectations than they are to learning, as measured by performance on anonymously graded, uniform final exams,” they conclude.
If the evidence against using SETs as a measure of teaching effectiveness is overwhelming, why continue using them? The University of Southern California posed this very question and decided to stop using the surveys in promotion and tenure decisions. Instead, after reviewing the research, the university adopted a peer-review model. Students will continue to be asked their opinions about their courses, but those responses will not be used in promotion and tenure deliberations.
A similar move was made at the University of Oregon this past spring. “Those scores have nothing to [do] with how much a student learns in a class,” said senate vice-president Bill Harbaugh. Speaking with the university’s student newspaper, the Daily Emerald, Harbaugh said “that’s really problematic because our intent as [a] university is to try to get better at teaching.” These evidence-based decisions deserve to be applauded. This is how one should pursue teaching excellence — with genuine concern for student learning, as opposed to anxiety over consumer satisfaction.
But what does excellence mean in the context of post-secondary education? It’s a highly fungible term and open to abuse. One suggestion comes from Peter Starr, dean of the College of Arts and Sciences at American University, who suggests we pursue something he calls “cognitive humility.” Starr recommends we strive to develop our students’ capacity to be wise. “In all of its manifestations, wisdom is not an end state so much as a process — not a body of knowledge but an approach to its acquisition; not a fixed corpus of moral and ethical answers but a deep-seated (and ever-renewed) engagement in ethical questioning.”
That’s the kind of excellence I can sign up for — a learning outcome that can’t be measured and represented numerically but which can help students and society cope with the social, cultural and political complexities facing us.