Engaging Students Through Tests

I took a lot of “blue book” exams when I was in college — and I was good at them. In case you’re not familiar with the “blue book,” it is a thin booklet filled with lined paper, typically available for purchase at the university bookstore, which students use to complete essay exams. Sometimes I wrote very targeted answers to the essay questions, but I also knew and occasionally used the strategy of writing everything I knew that was even tangentially related to the topic and hoping that I hit the answer the instructor was looking for somewhere in my long response. This strategy served me well.

As an instructor, I have had no desire to recreate this scenario and read the everything-I-know-about-things-related-to-your-question essays. Sure, I could tell students not to write these kinds of answers, but if they do anyway, it is hard not to give them credit if they include an appropriate answer somewhere within a more long-winded response. I have also realized that I am not interested in testing students’ ability to write essays in a timed context.

This realization left me with the question of what and how I did want to test — and whether I wanted to test at all. I was thinking about this question at the pedagogy workshop at the SHEL-10 conference (Studies in the History of the English Language) this past weekend, where participants had a lively discussion about what instructors ask students to memorize about the history of the English language.

In the end, I believe, the answer to my question comes down to learning goals. If we structure our courses from the beginning around specific learning goals, then we can better determine whether testing will effectively enhance and assess students’ learning — and if so, how the test(s) should be structured to do that.

Writing a good test is hard. I think it is reasonable to ask how we can make test-taking as interesting a learning experience as we try to make lectures, essay assignments, and other components of our courses. To do this, we need to interrogate the types of tests we took as students and learn from the ones that seemed to actually engage our learning and the ones that did not. If we do passage identification from literary works, what does it achieve for students and for us? Are there things we want students to do beyond identifying where the passage came from? If we use blue-book exams, how can this timed writing exercise best help students pull together and display their learning? And so on.

A specific example from my own teaching may be helpful here. In my survey courses (e.g., the introduction to English linguistics course, the history of English), I continue to employ midterm and final exams because I think they usefully ask students to review and synthesize technical material across units and connect that material to broader questions. Note the verbs here: review and synthesize. I do not ask students to memorize the technical material. I feel like if I ask them to memorize it, then all I can fairly do is ask them to reproduce it (or “regurgitate it,” to use an unfortunate phrase in the academy). It is much more interesting to me to give them new problems or puzzles to solve on the exam, such that they can apply the technical material in these new situations. That is the learning I want to assess.

For this reason, I allow students to create an 8.5 x 11-inch “cheat sheet” (both sides) for each exam. I also provide an extensive exam review sheet that details what information I expect students to control and synthesize. It does not seem interesting to me to see if they can guess what information I think is most important. I would rather students spend the time working through the most important material and trying to reinforce their understanding of it so they can apply it. Students consistently report that by the time they have finished creating their (often very elaborate, color-coded) cheat sheets, they barely need them in the exam itself. The learning and synthesizing have already happened. For students with testing anxiety, the mere presence of the cheat sheet can also help alleviate the stress that otherwise gets in the way of their ability to demonstrate what they know.

The cheat sheet allows me to write more interesting tests, and I take it as a personal challenge to make the tests worth students’ time. After all, if they are going to put the time into studying, the exam should then be a thought-provoking challenge, which ideally will help them see what they have learned. New puzzles and problems also have the potential to teach students something new in the context of the exam. (And if I can make them laugh at least once during the exam, all the better.)

The focus on learning goals has also changed the kinds of questions I ask on exams. For example, one of the course goals in my survey courses is that students will be able to explain specific ideas about language authority, variation, and change to people who have never taken a linguistics course. Well, if that is one of my goals, then I should test that. So I now write exam questions like these (where students are expected to write a detailed paragraph in response):

  • You are discussing the word groutfit with a friend, and she says, “I use it all the time, but it’s not a word. I checked the dictionary and it’s not there.” Given what you have studied this term, provide an informed response (written as a spoken response) about whether or not you think this is a word, whether you would call it slang and why, and the role of a dictionary in determining what counts as a word.
  • You are having dinner with a good friend and his parents. Over dessert, his parents start criticizing the way that young people speak, with lots of like, you know, I mean, dude, well. “Useless clutter,” they say about these words. Make a concise and persuasive argument to these parents that these forms are not always useless clutter.

When students get into the spirit of the exam, the responses to these questions are often smart and delightful (e.g., one that begins, “Well, parents of my good friend, I beg to disagree about the conversational clutter … ”), which makes reading and grading the exams more rewarding as well.

To be clear, I have not figured out how to write the perfect exam. Sometimes my exams are too hard or too long, or a question I was excited about turns out not to generate especially interesting responses. But I have found that allowing myself to think more creatively about the format of exams and requiring myself to link the structure of exams to my learning goals has made the testing experience more rewarding for students and for me.
