Paul A. Kirschner & Mirjam Neelen
Many schools involve their students in teacher ratings. The question is whether that’s a good idea.
Subjective judgments play an increasing role in both teacher appointments and evaluations of how teachers function. But is it good practice to have students rate their teachers, just like that? Are such evaluations, whether carried out by a teacher’s colleagues or by students, reliable? Recently, Bob Uttl and colleagues published an excellent meta-analysis on this topic.
While many feel that students can reliably rate the quality of their teachers and should have a strong voice in matters such as tenure, many studies suggest that such ratings are NOT the way to evaluate teacher quality. In spite of this, the National Action Committee for Students (LAKS; a Dutch advocacy group for secondary school students and their rights[1]), for example, writes in its so-called “handbook” that student participation is important in teacher appointment and teacher ratings. According to LAKS, students should play a major role in these types of processes. LAKS states that by having students rate their teachers, schools can paint a clear picture of the quality of their teachers. One way to do this, in their opinion, is by sending out surveys to the students once a year so that they can give their opinions on how their teachers are doing. In order to be progressive (or possibly politically correct), many schools are following up on this and using their students’ opinions for hiring and teacher evaluation. This is rather strange, because research has a lot to say about the (lack of) reliability and meaning of this type of rating, and it has been saying it for decades.
A small but fine selection, from most recent to oldest:
In their 2016 meta-analysis, Bob Uttl, Carmela White, and Daniela Wong Gonzalez conclude that there is no correlation whatsoever between students’ opinions of their teachers and what they actually learn (to be exact, the correlation was .03, which is statistically just about ZERO). One would think that it’s somewhat important for a teacher to facilitate students’ learning somehow, no? The researchers state:
The entire notion that we could measure professors’ teaching effectiveness by simple ways such as asking students to answer a few questions about their perceptions of their course experiences, instructors’ knowledge, and the like seems unrealistic given well established findings from cognitive sciences such as strong associations between learning and individual differences including prior knowledge, intelligence, motivation, and interest (p. 19).
They also conclude that students don’t learn more from teachers with high ratings (that is, the ones that are liked most by their students).
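To put that correlation of .03 in perspective, here is a back-of-the-envelope sketch (our illustration, not a calculation from the meta-analysis itself): squaring a correlation coefficient gives the share of variance in one variable that the other accounts for.

```python
# Back-of-the-envelope: how much of the variation in learning do
# student ratings "explain" at r = .03? The squared correlation
# (coefficient of determination) gives the shared variance.
r = 0.03
r_squared = r ** 2
print(f"r = {r}, r^2 = {r_squared:.4f}")
print(f"Ratings account for about {r_squared:.2%} of the variance in learning.")
```

In other words, roughly 0.09% of the differences in how much students learn moves with the ratings they give, which is why the authors treat the correlation as effectively zero.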
Twenty years earlier, Anthony Krautman and William Sander studied the relationship between learners’ grades and how their teachers were rated, and found that ratings strongly correlate with the grade that the student expected to receive and/or received. The two researchers note that teachers are basically “buying” more positive ratings by giving out better grades, which carries the risk that grade inflation is waiting in the wings.
Twenty-four years earlier – we’re back in 1972 now – Miriam and Burton Rodin wrote in Science about the relationship between subjective and objective measures of good teaching. The three teachers who received the lowest subjective (anonymous) ratings achieved the highest objective ratings (measures of learning outcomes). The researchers suggest that perhaps students don’t like teachers who make them work (too?) hard; that is, harder than the students would like. Rodin and Rodin conclude:
However, the present data indicate that students are less than perfect judges of teaching effectiveness if the latter is measured by how much they have learned. If how much students learn is considered to be a major component of good teaching, it must be concluded that good teaching is not validly measured by student evaluations in their current form (p. 9).
Last but not least, it appears that teacher ratings may strongly depend on the teacher’s gender! Various studies show that female teachers are generally rated lower than their male colleagues. In a recent study conducted by Lillian MacNell, Adam Driscoll, and Andrea Hunt in an online environment (where the teacher’s gender was thus not directly visible), teachers who were presented as male received better ratings than those presented as female, even when in reality they were both women!
And this is not only the case in education. Fortune Magazine reports that gender also plays a role in the type of feedback an employee receives at review time. It reports, for example, that while men are given constructive suggestions, women are given constructive suggestions – and told to pipe down. While a male is “assertive” or “busy”, a female is “bossy” and “selfish”. Women who are leaders are also more likely to be described as “abrasive”, “aggressive”, and “emotional”.
If you’re up for some fun (and perhaps a bit of a shock), you might want to visit Ben Schmidt’s blog, where you can play with an interactive tool showing which words college and university students use on RateMyProfessor.com (>14 million ratings) to characterise male versus female teachers, split by study discipline. While male teachers are described as “knowledgeable”, “brilliant” and “awesome”, women are described as “nice”, “beautiful”, “ugly” or “bossy” (see this article in the New York Times for a nice synopsis). Interestingly, “sexy” is mostly used for male teachers. Not sure what to think of that.
Perhaps the majority of schools that involve their students in teacher ratings should scratch their heads and strongly reconsider this practice!
[1] One of the most well-known LAKS activities is their complaints service for the national final exams in the Netherlands (eindexamens). In 2016 they registered more than 200,000 complaints about the different A- and O-level exams while there were fewer than 100,000 students who took those exams!