Although the recent weather down here in Atlanta suggests that we’re nearing May and the semester’s end, it turns out that we’re only getting close to midterms. The middle of the semester is a great time to take stock of how your courses are going. One approach is to conduct a mid-semester self-evaluation, asking yourself what’s going well and what you can do to improve the rest of the semester.
While knowing thyself is useful, it’s also useful to know what your students are thinking. That’s why we at ProfHacker have written about giving midterm evaluations to students every year since our blog was born: Billie covered the Small Group Instructional Diagnosis as a midterm model in 2009; Amy asked her students to evaluate their own performance in 2010; and George provided four simple questions for a midterm evaluation in 2011. (Guys, I’ve got 2012 covered. Holler!)
It turns out that if there’s one thing that we at ProfHacker like more than giving students midterm evaluations, it’s Google Docs. (You want posts about Google Docs? We got ‘em!) Two weeks ago I wrote about using Google Docs to check whether your students have done their reading. Today, I’d like to suggest using Google Docs to streamline the process of giving students a midterm evaluation, using a very similar approach.
Before class started, I created a document in Google Docs with two questions: “What is working well so far?” and “What could be done better?” I inserted a series of blank bullet points underneath each question. After changing the document’s visibility settings and giving everyone who had the link the right to edit it, I shared the link with the class. I then asked the students to answer the two questions and to be as specific as possible. They took the next five minutes to write simultaneously on the evaluation, all doing so anonymously.
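If you wanted to automate that setup rather than click through the Google Docs interface, the same steps (create a document, lay out each question over blank bullets, open it to anyone with the link) could be scripted with Google’s Docs and Drive APIs. This is a hypothetical sketch, not what I actually did; the helper function `build_eval_text` is my own invention, and the commented-out API calls assume you’ve already built authenticated `docs` and `drive` service objects.

```python
def build_eval_text(questions, bullets_per_question=10):
    """Assemble the document body: each question followed by blank bullets.

    Hypothetical helper for laying out the evaluation text before
    pushing it into a new Google Doc.
    """
    lines = []
    for q in questions:
        lines.append(q)
        # Blank bullet points for students to fill in anonymously
        lines.extend(["• "] * bullets_per_question)
        lines.append("")  # blank line between sections
    return "\n".join(lines)


text = build_eval_text([
    "What is working well so far?",
    "What could be done better?",
])

# With authenticated service objects from googleapiclient.discovery.build
# ("docs" v1 and "drive" v3), the remaining steps would look roughly like:
#
# doc = docs.documents().create(body={"title": "Midterm Evaluation"}).execute()
# docs.documents().batchUpdate(documentId=doc["documentId"], body={
#     "requests": [{"insertText": {"location": {"index": 1}, "text": text}}]
# }).execute()
# drive.permissions().create(fileId=doc["documentId"], body={
#     "type": "anyone", "role": "writer",  # anyone with the link can edit
# }).execute()
```

The `"type": "anyone", "role": "writer"` permission is what reproduces the “everyone with the link can edit” setting described above.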
I think that there are two advantages to collecting midterm evals in this way. In the first place, I find that students write more on open-ended questions when they can use a keyboard than when they have to write by hand, as I’ve asked them to do in past midterm evaluations. (Since it was an “Introduction to Digital Humanities” course, all of the students had computers in each class.) But I could have gotten the same benefit if they’d answered individual copies of a survey, even using Google Docs Forms, as Thomas R. Burkholder has discussed previously. So why, then, did I make the evaluation collaborative, albeit with anonymous contributions? This public nature is precisely the second advantage of the approach. I wanted the students to know what their peers thought were the strengths and weaknesses of the class. They would be able to see what they advised me to do differently through the rest of the semester and could then hold me to it. (It’s this accountability that led Mark to make his teaching evaluations public. I’ve similarly put all the evaluations I’ve ever received on my own website.)
As pleased as I was with this quick hack, when I shared it with some colleagues in the social sciences, they wondered if the approach could lead to groupthink and an averaging of responses. It’s an interesting question, and the next time I teach multiple sections of a course, I’ll conduct some experiments.
Do you have any different approaches for giving midterm evaluations besides a typical survey? If so, please share with us in the comments!