Monday, January 08, 2007

Rubrics and Templates and Bears, Oh My!

If you really want to get faculty members' backs up, start dropping terms like 'rubric' or 'template.'

In a recent meeting with some other deans, we discussed the ways in which teaching effectiveness is evaluated. (At a cc, teaching effectiveness is the single most important component of a professor's evaluation.) It quickly emerged that there's no single way to do it. We do class observations, but there's no set list of criteria for judging whether a given class worked or not. We have end-of-semester student evaluations, which are easily quantified but of limited value. For reasons I still don't fully understand, we don't even have access to some fairly basic data, like grade distributions and drop/fail rates.

One of my colleagues opined that, in the absence of some pretty clear templates, we're setting ourselves up for lawsuits if some professor takes issue with her evaluation. Since class observations are now basically holistic write-ups of what went on, along with the 'expert opinion' of the observer as to whether it worked, it might be hard to defend any given opinion in court. Playing anticipatory defense (which is a pretty good description of administration generally), he suggested codifying as much as possible what a 'good class' looks like, distributing the template/rubric ahead of time, and just checking off the various items (“delivered material to multiple learning styles”) as we go.

I could see his point, but something about the idea makes my skin crawl.

It's easy to come up with a list of teaching practices that, all else being equal, would generally be considered positive. It's much harder to employ every single one of them in any given class period, or in every single discipline. When a given observation results in checking off, say, only 50 percent of the boxes, is that good or bad? This strikes me as the wrong question.

For example, our lab science classes are typically split into two components: lecture and lab. Observe the lecture, and you'd come away criticizing the professor for being too abstract. Observe the lab, and you'd come away criticizing the professor for not explaining very much. Observe both, and you're violating the faculty contract.

In other disciplines, other issues crop up. It's easier to do 'hands-on' learning in a ceramics class than in a philosophy class – does that make the ceramics professor good and the philosophy professor bad? While we generally think that incorporating technology is a good idea, is it automatically true that PowerPoint makes a class stronger? (I've seen both effective and ineffective uses of it.) For that matter, is it automatically true that lectures are bad? Is ambiguity always bad? (In my perfect world, “learning to tolerate ambiguity” would rank among the most important goals of higher education. How that's supposed to happen without students ever encountering ambiguity is beyond me.)

Anticipatory defense can be prudent, but it can also lead to valuing all the wrong things. Just as faculty resent teaching to a test, I'd expect them to resent teaching to a template, and rightly so. Some of the best classes I've seen – hell, some of the best classes I've taught – have been at least partially serendipitous, the result of an unanticipated student comment that sent the discussion off in exciting directions. The ability to react to the teachable moment, to call an audible when the defense lines up in an unexpected way, is part of the professor's professional toolkit. The ability to recognize when that's done well or badly is part of the evaluator's professional toolkit. That's not to deny that evaluators can be obtuse or that class discussions can go horribly wrong, obviously; it's to say that those risks are part of the cost of doing business.

From a liability standpoint, rubrics are appealing. They lay out, in easy-to-digest form, some nice, neutral criteria by which you could defend any particular adverse finding. They place a 'check' on any given observer's power to notice only what he wants to notice. They ought to be able to prevent the worst abuses. But they do so at the cost of capturing the overall success or failure of the class. I've seen professors break rules and succeed, and I've seen classes that just didn't work, though I couldn't quite specify why. Rubrics are based on the fiction that the whole is exactly the sum of its parts.

Have you seen a rubric that really worked well in capturing whether or not a given class was successful? If there's a way to play anticipatory defense and still capture what I need to capture, I'm all for it.