This is a guest post by Derek Bruff, assistant director at the Vanderbilt University Center for Teaching and senior lecturer in mathematics at Vanderbilt. You can follow Derek on Twitter (@derekbruff) and on his blog, where he writes about educational technology, student motivation, and visual thinking, among other topics.
I teach math courses, but for the first time this fall, I’m teaching a first-year writing seminar. It’s still a math course, since the topic is cryptography (codes and code-breaking), but it has a healthy dose of history and, perhaps most significantly, writing. I’ve not taught writing before, as you can imagine, but I’m embracing the chance to broaden my teaching horizons.
Teaching a writing seminar has also provided me with an opportunity to use clickers in ways that are new to me. Classroom response systems (“clickers”) are technologies that enable teachers to rapidly collect and analyze student responses to multiple-choice questions during class. The teacher poses a question, each student submits his or her answer using a small handheld transmitter that beams data via radio frequency to a receiver attached to the teacher’s computer, and software on that computer then displays a histogram of the results. This system sounds simple, but it’s a surprisingly flexible tool for engaging students in deep learning.
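(For the curious: the software side of this is essentially just tallying votes and drawing a bar chart. Here’s a purely illustrative Python sketch of that tally-and-histogram step; it isn’t the code any actual clicker vendor ships, and the function name and sample responses are invented for the example.)

```python
# Illustrative only: tally multiple-choice clicker responses and print a
# simple text histogram, roughly what the receiver software displays.
from collections import Counter

def show_histogram(responses, options=("A", "B", "C", "D")):
    """Count the submitted answers and print one bar per answer choice."""
    counts = Counter(responses)
    for option in options:
        n = counts.get(option, 0)
        print(f"{option}: {'#' * n} ({n})")

# Hypothetical example: fifteen students answer a four-option question.
show_histogram(["A", "C", "C", "D", "B", "C", "C", "A", "D", "C",
                "B", "C", "D", "C", "A"])
```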
One of the many components of teaching writing that are new to me is in-class peer review of student drafts. After hearing from some colleagues outside of mathematics about how they use clickers to facilitate peer assessment of student work, I’ve now had a chance to try this technique in my own class. What follows is a description of this first experience using clickers to facilitate peer review. For faculty who don’t typically teach writing, I hope you’ll find my rookie observations useful. And for faculty who have more experience teaching writing, I hope what follows will inspire you to think about how you might use clickers.
My students’ first essay assignment, a three-page opinion piece addressing one of three questions I provided, was due recently. For the class session before the papers were due, I asked for a volunteer to share his or her draft so that we could have a conversation as a class about writing in the context of this assignment. Five or six of my students volunteered their papers, and I selected a draft from a student I’ll call Steve.
I had already given my students a copy of my grading rubric for this assignment, so they had some idea what I expected of them in their papers. The rubric described levels of quality (poor, acceptable, good, excellent) in each of nine categories (four addressing the content of their writing, three addressing clarity, and two addressing presentation). For the peer review activity, I passed out copies of my rubric and asked my students to evaluate Steve’s paper in each of the nine categories.
For example, one of the categories was “clarity of opinion.” Here’s how I described levels of quality within this category:
- Poor - There’s no evidence that the student has an opinion about the central question.
- Acceptable - The central question is addressed, but the student’s position on that question isn’t that clear.
- Good - The student’s core position on the central question is explicitly stated and would be clear to other students.
- Excellent - The student’s core position on the central question is very clear, as is the student’s position on one or more related questions.
After the students had a chance to evaluate Steve’s paper using the rubric, I broke out the clickers. I had selected several categories for us to discuss, and, for each category, I asked the students to respond anonymously to a clicker question with their evaluation of Steve’s writing—poor, acceptable, good, or excellent. I displayed the results of these clicker questions on the big screen, and we used them to discuss Steve’s writing. These discussions, grounded in the concrete example of an essay in front of us, were intended to help my students better understand my expectations of their work and to learn to evaluate the quality of their own writing.
Here are the results for the clicker question that asked students to evaluate Steve’s paper in the “clarity of opinion” category:
Students were split mostly between “good” and “excellent.” When I asked for reasons why students rated the paper as “excellent,” one student (let’s call him Dave) indicated that Steve’s opinions were very clear. I asked Dave to point out a sentence or two where Steve’s opinion was most clear. This set the tone nicely for subsequent student comments, since they realized I wanted them to back up their evaluations with examples. I rated Steve’s paper as “good” in this category, noting that his statement of opinion on the main question was awkwardly worded, making it less clear than it could have been. A couple of students also noted that the primary statement of opinion was located at the end of the second paragraph, which made for a weak opening to the paper. They would have preferred that the thesis statement come earlier in the essay.
How did clickers help here? I suspect that had I not used clickers, Dave would have still volunteered his opinion that Steve’s central opinion was very clear. Dave speaks up often in class, and praising Steve’s paper carried little risk of upsetting him. However, I’m not sure my students would have made those comments about Steve’s weak opening without the clickers. The bar chart on the screen made it clear that more than half the students felt Steve’s clarity of opinion was less than excellent, and I suspect that this knowledge made it more socially acceptable for a couple of students to volunteer specific reasons why.
The question about mechanics (grammar, spelling, punctuation, and such) provided me with an opportunity to clarify my expectations for this aspect of my students’ writing. Here are the results of this question:
I asked students who had rated the paper lower in this category to identify mechanical problems. The first student to reply noted that Steve hadn’t always followed the word “however” at the beginning of a sentence with a comma. This was, perhaps, the last thing I would have noticed about Steve’s paper. It was clear that at least one of my students was viewing mechanics very differently than I do, since this kind of error seems very minor to me. I pointed out my definition of good mechanics on the rubric: mechanical problems that don’t slow down my reading of the paper. This might be a lower bar than some writing instructors would set, but the students certainly appreciated the clarification!
What role did the clickers play here? I can’t say for sure, but I would guess that when the students saw how differently they had rated the mechanical aspects of Steve’s writing (with lots of votes for “excellent” but just as many votes for only “acceptable”), they realized that this was a category they needed to attend to. This attention meant that I had a rapt audience when I clarified my version of “good” mechanics.
Another clicker question asked students to rate how well Steve had established the importance of the question he tackled in his paper: Should the public have access to secure encryption technology even if that means the government has a harder time with police work and anti-terrorism efforts? The results:
Almost all of the students gave Steve an “excellent” rating in this category. This didn’t generate much discussion (other than a few students noting aspects of Steve’s paper that established the importance of his question), but I included the results here to show that sometimes the students were mostly on the same page as each other and as me. (I would have given Steve an “excellent” here, too.) I suspect that Steve felt pretty good when he saw that bar chart.
Finally, I asked students to indicate how well Steve had established some personal connection to his topic. This led to the biggest spread of results of the day:
I looked at those results and said out loud, “Seriously?” (Perhaps not the most useful reaction, but an honest one!) The clicker question made it immediately clear to me and, I think, to my students that this area was a fuzzy one for them. How do you establish a personal connection to a topic as broad as the tradeoff between security and privacy? This led to a useful conversation in which the students brought up Steve’s lack of personal anecdotes but also what some saw as his clear enthusiasm for the topic. I mentioned that one could read passion into Steve’s work, but that Steve had not done much to distinguish the paper as uniquely his.
In summary, here’s what I think the clickers added to this activity:
- Because the clickers allowed students to critique Steve’s work anonymously, they felt freer to indicate areas where they thought he had missed the mark. And once the results were on-screen and it was clear that several people in the room felt similarly, students were more likely to speak up and share their criticisms in the class discussion.
- The clicker questions for which there was little consensus (i.e., most of them) also showed students that the evaluative work I was asking them to do was not easy and that their understanding of what constitutes quality writing might not align well with mine. This gave the students additional motivation for listening and participating in the discussion of Steve’s work.
- Also, this was fun. I think the students were curious to know how their peers would respond to the clicker questions I posed!
I’ll add that when I’ve talked with other instructors about using clickers to facilitate peer assessment of student work, these are the benefits they identify, too.
Have you used clickers for peer assessment? If so, how did it go? If not, what potential do you see for using them in your courses?
(And for more on my experiences using clickers in my first-ever writing seminar, check out my blog posts on teaching about academic integrity and plagiarism.)
Image by Flickr user Mr. Wright. / Creative Commons licensed.