Using clickers for peer review of proofs

September 15, 2011, 8:52 pm

Right now I’m teaching a course called Communicating in Mathematics, which serves two purposes. First, it’s a transitional course for students heading from the freshman calculus sequence into more theoretical upper-level math courses. We learn about logic, how to formulate and test mathematical conjectures, and we spend a lot of time learning how to write correct mathematical proofs. And therein is the second purpose: The course is also labelled as a “Supplemental Writing Skills” course at Grand Valley, which means that a large portion of the class, and of the course grade, is based on writing. (Here are the specifics.) It’s a sort of second-semester, discipline-specific composition class. (Students at GVSU have to take two of these SWS courses, each in a different discipline.)

As a writer (of sorts) myself, I really enjoy this kind of setting. It fuses together what I think are two of the most powerful forms of human expression: the written word and rigorous mathematical analysis. For many students, though, it’s a major culture shift from what’s familiar to them about math. The math here is not just an easily-mastered algorithm, and there may be more than one “right answer” to a problem — or a continuum of “answers” having varying levels of “rightness”, or maybe nobody knows if there’s an answer of any kind in the first place! This course is an introduction to the wide-open, exploration-heavy world that mathematicians actually inhabit. And it can be a bit like being dropped off in the middle of an alien world if you’re a student.

To help students get comfortable and skilled with proofs, I’ve enlisted the help of classroom response systems, also known as “clickers”. I’ve been teaching with clickers for the past couple of years, usually alongside pedagogies like peer instruction to which those devices are well suited. It’s worked amazingly well. So, playing off an idea I first read in Derek Bruff’s book on clickers, we’ve introduced peer review of proofs in the Communicating in Mathematics class. Here’s how it works.

Students are put into groups of 3 or 4 and given a mathematical statement to prove (or a conjecture to either prove or disprove). They work collaboratively for a certain period of time to come up with a team writeup of their proof. I try to give simple problems that can be done, more or less, in 15 minutes. One person on the team is tasked with creating a clean, formal writeup that adheres to our class standards for writing. Those are spelled out in our textbook.

Students hand their team writeups in to me. I randomly select one of them and display it on the document camera in the room (covering up the names to preserve privacy). Students are instructed to read quietly for a minute or two. While they are reading, they are tasked with grading the proof using three criteria:

  • Mathematical correctness — Are all the calculations correct? Are definitions being stated correctly? Are previous theorems being stated correctly and invoked appropriately?
  • Logical soundness — Is every step in the proof justified? Does the proof assume only what it is allowed to assume? Are all arguments carried out correctly?
  • Clarity of writing — Is the language used correctly? Are the words spelled correctly? Is there English text used to set up and explain mathematical work? Is there too much English?

Once the students have had a chance to read the selected work, they use their clickers to rate the proof on each criterion, one at a time, followed by a discussion of the vote. Ratings are on a scale of 0 (= terrible) to 4 (= perfect), so the entire proof is “graded” on a scale of 0 to 12.

This system does several things that are good for students. It provides evidence that students can get a halfway-decent draft of a proof done in a short period of time. It also puts students’ work under the microscope not just of the instructor but also of their peers, which hopefully gets students into the mindset of writing well rather than just writing to give the professor what s/he supposedly wants. It teaches students how to listen — that’s part of communicating in mathematics too! — and to receive feedback meant to help their work. Finally, the 0–12 “grading” scale with three different criteria happens to be the rubric I use for actual grading of their proofs, so they are being trained to see their own work the way an expert would.

Here’s an example of how this actually went recently in the class. Students were asked to prove the statement: If m is an odd integer, then 3m^2 + 4m + 6 is also an odd integer. Here’s the student work that got randomly selected from the pile.
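(The students’ actual writeup was displayed on the document camera and isn’t reproduced here. For reference, a standard proof of this statement, not necessarily the one the students wrote, might run as follows.)

```latex
\textbf{Proof.} Suppose $m$ is an odd integer. Then $m = 2k + 1$ for some
integer $k$. Using algebra, we show that
\begin{align*}
3m^2 + 4m + 6 &= 3(2k+1)^2 + 4(2k+1) + 6 \\
              &= 3(4k^2 + 4k + 1) + 8k + 4 + 6 \\
              &= 12k^2 + 20k + 13 \\
              &= 2(6k^2 + 10k + 6) + 1.
\end{align*}
Since $6k^2 + 10k + 6$ is an integer, $3m^2 + 4m + 6$ is odd. $\blacksquare$
```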

When the students voted on “mathematical correctness” on this, all but two students graded it at a 4. When I asked if there were any questions, one of those students responded that he felt the proof had no mathematical mistakes except for the fourth line from the bottom, where q was written in terms of x. I had been thinking that this was more of a clarity issue — they’ve introduced a new variable to replace an old one, which isn’t really wrong, just confusing — but I think the student had a point. Interesting aside here: One of the students in the group that wrote this proof emailed me later to tell me that he had known the “x” was wrong. So there was not unanimity within the group on this, and the review process gave that student something of a dissenting voice.

The “logical soundness” vote was unanimously a “4”, so I just stopped to highlight what was so well-done about the proof in terms of the easy-to-follow justifications and so on.

When students voted on “clarity of writing”, it was 50% at a “4”, 35% at a “3”, and 15% at a “2”. Some of that was due to the introduction of the “x” variable, but students had some other sharp observations. One student felt that the line right after “Using algebra, we show that” should not start with just an equals sign. Interesting point, I said; anybody else feel that way? What can we do to improve the clarity here? We ended up deciding to put 3m^2 + 4m + 6 on the left side, and everybody agreed this simple step helped. Another student wondered if “This implies that” in the fourth line from the bottom could be clearer or shorter; still another student was confused about what the word “this” was pointing to. We eventually decided to replace that phrase with “therefore”, which we all thought was a good lesson in how shortening your proof often makes it clearer. (Someone also pointed out that the ambiguous “this” was a lot like starting a column of equations without a left-hand side.)

I was really impressed with just how hard and focused my students were thinking about this proof. They brought up questions I would not have thought of and improvements that everyone found useful. They all seemed genuinely invested in making it an excellent bit of writing, not just passable or “what the professor wants”. Perhaps they saw a bit of themselves in the mirror when they saw other students’ writing. When you’re a student in a strange class learning a new and possibly threatening twist on a subject you thought you had figured out, what a boost it must be to know you’re surrounded by people who want to make you better!

Finally, I’m certain that clickers are not necessary for this practice. But they do add a nice layer of anonymity on the process, like a double-blind review of an article, and the voting data is available immediately. And it gives less outspoken students in the class (there are lots of those in a math class) just as much of a voice as anybody else.

How about you? Are you doing anything similar in your own classes? See any places for improvement or modification in this system?

This entry was posted in Clickers, Educational technology, Math, Peer instruction, Problem Solving, Teaching, Technology.