When you measure what you are speaking about and express it in numbers, you know something about it, but when you cannot (or do not) measure it, when you cannot (or do not) express it in numbers, then your knowledge is of a meagre and unsatisfactory kind.
As part of the long, multi-person discussion going on about tenure and other aspects of academia (see John Bruce's blog and scroll down, and also see Jim Hu's comments), there has been some talk about the subject closest to my heart: teaching. John Bruce posts a few very negative reviews of faculty members from The Dartmouth Review and wonders how such people get hired given the current 'buyer's' job market. Jim Hu responds by examining what he can of one of the professors so harshly criticized, Fernando Commodari. Jim says that he takes such evaluations with a large grain of salt. I agree, except that I would add that you need a whole shaker.
Without a lot of context, student evaluations are probably meaningless--and I say this as one to whom student evals have been very, very good. For example, teachers in introductory classes get consistently lower numeric evaluations than they do in upper-level courses. Science profs in particular get slammed on intro courses. There's also a pretty good correlation at the lower end of the scale between grades awarded and evaluation numbers, so that if you're a weak professor, handing out A's can increase your evals (science profs, especially in intro classes, don't usually have the luxury of very skewed grade distributions). At the upper end of the scale there seems to be less of a correlation between grades and evals, and in fact some of the very highest-evaluated profs give out some of the lowest average grades (yours truly included).
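That grade/eval correlation is just the standard Pearson correlation coefficient. As a rough sketch of what "a pretty good correlation" means, here is how you'd compute it; the grade and evaluation numbers below are entirely hypothetical, made up only to illustrate the calculation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Hypothetical data: average grade awarded (GPA points) vs. mean eval score
avg_grades = [2.4, 2.7, 3.0, 3.3, 3.6]
eval_scores = [3.1, 3.4, 3.9, 4.2, 4.4]
print(round(pearson(avg_grades, eval_scores), 2))
```

A value near +1 would mean easier grading tracks higher evals almost perfectly; the claim in the text is only that the association is noticeable at the low end of the scale, not this strong.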
Then there are the comment sheets, like those published by The Dartmouth Review, in which anonymous students can savage a professor. These are completely useless, imho, since we have no context for whatever other beef the student(s) may have with the prof. A cheating scandal, a refusal to give an extension, a disagreement over partial credit... anything can cause a student to blow up and write a scathing negative review. I've had colleagues get comments like "never available outside class" when I know for a fact that these profs are on campus 8-5 at least four days a week. But because the student didn't want to come in for 8:30 a.m. office hours, he or she wrote "never available."
Thus I give you Prof. Commodari's take on being listed as a terrible professor in The Dartmouth Review:
It's too late for me to go into the inaccuracies in that Dartmouth Review 2004 article (in the same article they rake the new President of Dartmouth over the coals for being an "outsider"). I never taught Chem 5 there. I only taught Chem 6, after declining the invitation the year prior, in the Spring of 2004 (March 15-June 30, 2004) during mud season. There is a revolving door of temporary faculty at places like Dartmouth, where the tenure-track and tenured faculty often need a break from teaching freshman chemistry. The fellow before me left and never returned. I took the job as an opportunity to check my experiences in teaching in mostly public, non-privileged settings against one in an Ivy school. In my Chem 6 class at Dartmouth, there was an element of students who could not adapt to my teaching style, which used PowerPoint to allow more time for problem solving. This 10% took it upon themselves to compare me in a "survey" / petition to a tenured faculty member who did nothing but chalk with his back to the class, as opposed to the more interactive approach that I use after 15 years of evolution. It was awful. They passed this survey around my class unbeknownst to me, until other students came to me, and it upset a good part of the class that was very content with my teaching. I had made the mistake of "blitzing," or e-mailing, the whole class with the class emails unhidden (I did not bcc), and this small element would e-mail the whole class, unbeknownst to me, trying to convince the other students to have me give up the PowerPoint. This was only one week into my teaching there. They made it very difficult for me. I did try to appease the minority by adding more chalk talks, but the minority had already poisoned the atmosphere in my class. At the end, my class average was a B+, higher than the B that was the Dartmouth "traditional" grade for the class, and I was pressured to change this, but I could not justify it based on Z-score calculations, etc.
I can assure you that my standards were no less than those of anyone who taught Chem at Dartmouth, as I teach with a focus on substance. Unfortunately, the 90% of the students who were happy don't post on blogs. Let me say that in all my years of teaching, the students, five years later, are most thankful to those professors who were rigorous and laid a strong foundation in the basic (chemistry) class in preparation for the future, even if at the time they might not have appreciated a professor who always challenged them to go beyond the average.
Fernando Commodari, Ph.D.
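The "Z score calculations" Prof. Commodari mentions are the standard way to argue a grade distribution can't honestly be curved downward: standardize each raw score as z = (x - mean) / stdev and assign letter grades by where the z values fall. A minimal sketch of that standardization, using hypothetical exam scores for illustration only:

```python
import statistics

def z_scores(scores):
    """Standardize raw scores: z = (x - mean) / sample stdev."""
    mu = statistics.mean(scores)
    sigma = statistics.stdev(scores)
    return [(x - mu) / sigma for x in scores]

# Hypothetical final-exam scores, for illustration only
raw = [62, 71, 78, 84, 90]
print([round(z, 2) for z in z_scores(raw)])
```

If the class's z distribution already centers where a B+ falls under the department's cutoffs, pushing the average down to a B means moving students' grades without any change in their standardized performance, which is the justification problem he describes.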
I want to focus on the last sentence of Prof. Commodari's email: "Let me say that in all my years of teaching, the students, five years later, are most thankful to those professors who were rigorous and laid a strong foundation in the basic (chemistry) class in preparation for the future, even if at the time they might not have appreciated a professor who always challenged them to go beyond the average." That kind of rigor (and I have no reason not to take Prof. Commodari's word for it) can alienate students who are overly concerned about GPA, and also students who are simply more comfortable with a different style of teaching.
So how do we evaluate teaching? At Wheaton we focus on the actual comments by the students rather than their numeric evaluations. We look for things like use of superlatives, discussion of the difficulty of the class, and phrases like "tough but fair." If students give specifics of what they learned, we take that as a big positive. But that still isn't a very good metric, and so our knowledge is of a meagre and unsatisfactory kind. What we really want is for students to have learned a lot from a course. If they did, it was a good course. But we don't have very good ways of measuring that learning without using up resources on before-and-after evaluation (it costs time and money to develop instruments, administer them before and after the course, grade them, and do the statistical work to make meaningful comparisons).
In the meantime, it's worth being skeptical of student comments, whether in anonymous college publications or at sites like ratemyprofessors.com.