I have been using clickers in my classes for three years now, and for me, there’s no going back. The “agile teaching” model that clickers enable suits my teaching style very well and helps my students learn. But I have to say that until reading this Educause article on the flight out to Boston on Sunday, I hadn’t given much thought to how the clicker implementation model chosen by the institution might affect how my students learn.
Different institutions implement clickers differently, of course. The article studies three implementation models: the students-pay-without-incentive (SPWOI) approach, where students buy the clickers but the course has no graded component for clicker use; the students-pay-with-incentive (SPWI) approach, where students purchase clickers and there’s some grade incentive for using them (usually participation credit, though this can vary too); and the institution-pays-clicker-kit (IPCK) approach, where the institution purchases a box of clickers (a “clicker kit”) for an instructor, who brings them to class.
For me, the most interesting finding in the study was that there appears to be a threshold for the perceived usefulness of clickers among students. The study found that in the SPWOI approach, 72% of student respondents said they would buy a clicker if it was used in at least three of the courses they were taking per semester. But drop that number to “at least two courses” and the percentage falls to 24%! So once clicker use saturates something like 50–75% of a student’s course load, students start seeing the devices as worth the money, even with no grade attached to their use. (Only a depressing 13% of students said they would pay $50 for a clicker based solely on its value as a learning tool. We have some P.R. to do, it seems.)
In the SPWI approach, 65% of respondents said they would buy a clicker if the contribution of clicker use toward their course grades was between 3% and 5%. (This is sort of mystifying. What do the other 35% do? Steal one? Just forfeit that portion of their grade?) The study doesn’t say explicitly, but it implies that if the grade contribution is less than 3%, the percentage would drop — how precipitously, we don’t know.
The study goes on to give a decision tree to help institutions figure out which implementation model to choose. Interestingly, if it gets down to choosing between the SPWI and SPWOI models, the deciding factor is whether the institution can manage cheating with the clickers. If so, then go with SPWI. Otherwise, go SPWOI — that is, if you can’t control cheating, don’t offer incentives.
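The final branch of that decision tree is simple enough to write down. A minimal sketch in Python, assuming we only encode the SPWI-vs-SPWOI branch described above (the function name and boolean input are my own invention, not the article’s):

```python
def choose_student_pay_model(can_manage_cheating: bool) -> str:
    """Final branch of the article's decision tree: once the choice
    is down to the two student-pay models, the deciding factor is
    whether the institution can manage cheating with the clickers."""
    if can_manage_cheating:
        return "SPWI"   # safe to attach a grade incentive
    return "SPWOI"      # if you can't control cheating, don't offer incentives
```

So `choose_student_pay_model(True)` yields `"SPWI"` and `choose_student_pay_model(False)` yields `"SPWOI"`; the rest of the article’s tree (including when IPCK wins out) would sit above this branch.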
Here at GVSU, I use the SPWI approach. Students have to pay for the clickers, but they get 5% of their course grade for participation. I take attendance at each class using the Attendance app for the iPhone. Then, once or twice a week, I cross-check the attendance records with the clicker records for the day. If a student is present but doesn’t respond to all the clicker questions, they lose participation credit for the day. This method also mitigates cheating: if a student is absent for the day but has clicker responses on record, I hold that student responsible for cheating, since someone else must be entering data for them. (Putting the burden on the absent student makes it less likely they’ll hand their clicker to someone else to cheat for them.)
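The cross-check itself amounts to comparing two small data sets. Here is a rough sketch in Python of the policy just described; the data shapes are my assumptions (attendance as a set of student IDs present that day, responses as a mapping from student ID to the number of clicker answers recorded), not anything exported by the Attendance app or the clicker software:

```python
def cross_check(attendance, responses, n_questions):
    """Apply the day's participation policy.

    attendance:  set of student IDs marked present
    responses:   dict mapping student ID -> number of clicker answers recorded
    n_questions: how many clicker questions were asked that day
    Returns (lost_credit, suspected_cheating) as two sets of student IDs.
    """
    suspected_cheating = set()
    for student in responses:
        if student not in attendance:
            # responses recorded while absent: someone else clicked in for them
            suspected_cheating.add(student)

    lost_credit = set()
    for student in attendance:
        if responses.get(student, 0) < n_questions:
            # present but didn't answer every question: no credit today
            lost_credit.add(student)

    return lost_credit, suspected_cheating

# Hypothetical day: four questions asked, Bob answers only three,
# Carol responds despite being absent.
present = {"alice", "bob"}
answers = {"alice": 4, "bob": 3, "carol": 4}
lost, cheats = cross_check(present, answers, n_questions=4)
```

With this data, `lost` comes back as `{"bob"}` and `cheats` as `{"carol"}`, matching the two rules in the paragraph above.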
Absent from this study is any attention to the actual instructional method used with the clickers. The authors point out, correctly, that such things are difficult to control for in a study, but still: clicker technology is basically neutral, and its only function is to enable active pedagogies. The choice of pedagogy matters. If you only use clickers to take attendance, students are going to have a lower perception of their usefulness than if you do something interesting with them, like peer instruction.
What about you? Are you using classroom response systems? If so, what implementation model are you using, and how is it going?