Thursday, June 08, 2017

Assessing Tutoring


For the sake of argument, let's say that you're trying to assess whether (or the degree to which) a tutoring center helps with student success.  (I’m not trying to single out tutoring; the same argument could apply to advising, or the library, or to any number of things.  I’ll just use tutoring here to make the question clear.)

You could look at usage figures to start.  Presumably, if usage is minimal, you can infer that not much is happening.  If usage is substantial and sustained, you can infer that at least some students see value in it.

But that's pretty indirect.  By that logic, a popular movie must be a good one, and a little-seen movie must be terrible.  We know that's not true.  

We can, and do, administer student surveys to see if they’re happy with their tutoring.  That’s useful, but still indirect if the goal is improved academic performance.

Assuming you had pretty good data systems, you could compare the grades (or completion rates) of students who got tutoring with the grades of students who didn't.  But that, too, is a noisy indicator.  Say that students who received tutoring averaged half a grade higher than those who didn't.  Is that because of the tutoring, or is it because the students with drive and initiative are more likely to bother to show up for tutoring?  If it's the latter, then you're confusing correlation with causation.   Alternatively, it may be that the students on the cusp of failing are likely to seek tutoring, while those who are comfortably acing the course don't bother.  Average the two effects together and you wind up with statistical mush.
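To make the mush concrete, here's a small simulation with entirely hypothetical numbers.  Tutoring is built to have zero true effect on grades, but high-initiative students and students on the cusp of failing both self-select into it at assumed rates, and the naive comparison still produces a gap:

```python
# Hypothetical simulation: tutoring has NO true effect here, yet
# self-selection alone produces a grade gap between the two groups.
import random
import statistics

random.seed(0)

students = []
for _ in range(10_000):
    ability = random.gauss(0, 1)  # unobserved drive/preparation
    # Assumed self-selection rates: high-initiative students seek tutoring
    # most often, near-failing students fairly often, everyone else rarely.
    if ability > 0.5:
        p_tutor = 0.7
    elif ability < -1.5:
        p_tutor = 0.4
    else:
        p_tutor = 0.2
    tutored = random.random() < p_tutor
    grade = 2.5 + 0.5 * ability  # grade depends only on ability, by design
    students.append((tutored, grade))

tutored_avg = statistics.mean(g for t, g in students if t)
untutored_avg = statistics.mean(g for t, g in students if not t)
gap = tutored_avg - untutored_avg
print(f"naive gap: {gap:+.2f} grade points, despite a true effect of zero")
```

Under these made-up rates the initiative effect dominates and the gap comes out positive; tilt the rates the other way and the same zero-effect tutoring center would look actively harmful.  The raw comparison can't tell you which world you're in.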

That's the scientific term.

The question matters when it comes to resource allocation.  Assuming finite resources, dollars spent hiring more tutors are dollars not spent on hiring more faculty, or more advisors, or more financial aid counselors.   If tutoring helps on its own, then there's an argument for beefing it up.  If it's mostly just a way to identify the self-starters, then beefing it up wouldn't help much.  If anything, it might even water down its value as a sorting mechanism, to the extent that a sorting mechanism has value.

The question isn't unique to tutoring, of course.  It applies to all sorts of interventions.   We know that students who join campus clubs complete at higher rates than students who don't.  That could be because clubs offer the benefit of a sense of belonging and a group of friends, or it could be because successful students are more likely to join clubs.

Presumably, you could settle the question with control groups over time.  Take two largely similar groups of students, and ban one of them from tutoring.  Then measure outcomes.  But that raises some pretty obvious ethical questions: the banned group would be paying the same tuition as everyone else while being denied a service.  I'd prefer not to try that method.
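For what it's worth, the same kind of hypothetical simulation shows why randomization would settle the question in principle, even if it's off-limits in practice: when assignment is a coin flip rather than self-selection, drive and preparation balance out across the two groups, and the raw gap converges on the true effect (set here, by assumption, to +0.3 grade points):

```python
# Hypothetical simulation of the (ethically off-limits) randomized version:
# tutoring is assigned by coin flip, so ability is balanced across groups
# and the raw gap estimates the true effect.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.3  # assumed causal benefit of tutoring, in grade points

students = []
for _ in range(10_000):
    ability = random.gauss(0, 1)
    tutored = random.random() < 0.5  # randomized, not self-selected
    grade = 2.5 + 0.5 * ability + (TRUE_EFFECT if tutored else 0)
    students.append((tutored, grade))

tutored_avg = statistics.mean(g for t, g in students if t)
untutored_avg = statistics.mean(g for t, g in students if not t)
print(f"randomized gap: {tutored_avg - untutored_avg:+.2f} "
      f"(true effect: {TRUE_EFFECT:+.2f})")
```

The coin flip is doing all the work: it severs the link between who seeks tutoring and who would have done well anyway, which is exactly the link the naive comparison can't untangle.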

I'm guessing that Brookdale isn't the first college in history to face these questions.  For purposes of our own outcomes assessment, it would be nice if we could answer them not just globally, but locally: in other words, I'm less interested in the success payoff of tutoring generally than I am in the success payoff of ours specifically.  That means figuring out reasonably simple approaches to local data.


Wise and worldly readers, is there a reasonably straightforward way to separate correlation and causation on the local, campus level?  I assume that our tutoring center helps, but it would be nice if I had something resembling evidence.