I’m really excited to be working next semester as a co-PI on a National Science Foundation grant with my Grand Valley State colleagues Scott Grissom (Computer Science), Shaily Menon (Chemistry), and Shannon Biros (Chemistry). We’re going to be interviewing a large number of GVSU faculty to try to understand why some of us adopt research-based instructional methods like peer instruction and why others don’t.
As we were putting together the grant proposal earlier this year, one statistic really impressed the importance of this study upon me. GVSU is a fairly big place – we have nearly 25,000 students on multiple campuses, with both undergraduate and graduate degrees offered. I don’t know how many sections of courses we offer in a given semester, but it’s got to be in the thousands. We have over 40 sections currently running for College Algebra alone! And yet: How many sections of courses offered at GVSU last Winter semester required students to have clickers for class? Would you guess maybe 100? 50? Try… 15. And three of those sections were my classes.
The presence of clickers in a class doesn’t by itself mean anything; you can teach well without them and poorly with them. But some of the best-established research-based teaching practices, most notably peer instruction, use clickers. You can do PI without clickers, but it’s not commonly done. So on my campus, only 15 sections out of a very large number are using clickers, and presumably even fewer are using peer instruction. And since a single instructor can teach multiple sections, those 15 sections could represent as few as five distinct instructors. I don’t expect peer instruction to be widespread, but still: We have over 20 years of data that point to PI being highly effective in physics instruction. It’s exceedingly easy for faculty using a traditional lecture model to transition to PI. And PI doesn’t fly under the radar at a place like GVSU, where a lot of us use non-traditional teaching methods in physics, math, and elsewhere. And yet very few are using PI or even trying it out.
Why is that? Our research will try to get to the bottom of this question, but there could be many reasons: perceived risks to promotion/tenure prospects, not enough money for equipment, not enough information about different teaching practices, bad experiences with non-lecture methods in the past, and more. It will be interesting to hear what people say.
One of our consultants on the grant is Charles Henderson from Western Michigan University, who has done similar work in the past and who also received one of the same grants we did. I just read through his paper “Why do faculty try research-based instructional strategies?” [PDF], written with Melissa Dancy and Chandra Turpen, and they had some interesting findings about where faculty who use research-based practices like peer instruction first heard about those practices and why they chose to adopt them. All of the respondents in their study were physics instructors, and the study focuses mostly on peer instruction. They found that the most commonly reported first contact with peer instruction was one-on-one interaction with a colleague (either at their home institution or at a different one). Reading about PI in a book — meaning this book — was a distant second, even though that book was at one point given away to physics instructors in hopes of getting them interested in PI. And journal articles? Only 27% reported hearing about PI first from that source, which is remarkable given that PI gets quite a lot of play in widely-read journals like the American Journal of Physics.
What was more interesting to me was that when faculty were asked why they chose to use peer instruction, the majority reported that it came down to intuition — PI was consistent with their sense of how students learn best. Yes, there are data to support this claim — two decades’ worth — but that seemed to be secondary. The data are nice, and having them is better than having data that frankly refute peer instruction, but there’s nothing to indicate that these faculty would have chosen differently if the data weren’t there. The second most commonly reported reason was that faculty felt lecture was unsuccessful and wanted an alternative. In both of the top two cases, the choice boils down to one’s gut rather than data (although the data don’t contradict the gut).
The study doesn’t address faculty who know about these kinds of methods and choose not to adopt them, or faculty who have tried them in the past and stopped. Are they going with their gut too? Do they have data suggesting that these methods work no better than lecture? Or do they distrust the data that seem to favor research-based methods? What’s stopping people from adopting practices that seem to work very well? It would be interesting to know in some systematic way — we all have our hunches, but hunches are not data. In fact, Henderson et al. have a later paper on just this subject, which I hope to get to later this week.