
Cross-Disciplinary Grading Techniques

In some academic fields, such as the humanities, open-ended questions in essay form are de rigueur for assessment purposes. In my specialty, part of the sciences, assessment tools are often designed to prompt the student for a single, final (and often numerical) answer. It’s usual for instructors to deduct a few points here and there when students omit a negative sign or make an algebra misstep in the development of the solution. But as one of my colleagues put it the other day, this type of grading is tedious, especially for very large classes. In the end, you feel like you’re adding up change in a complicated transaction.

When I first started teaching, I subscribed to this method of grading, because it’s how my tests were graded when I was a physics major. But for the past year or so I’ve been pondering ways that my grading of this type of question could benefit from procedures long used in the humanities for grading open-ended assessments. After all, our goals for assessment are not all that different: I’m as interested in the student’s development of the solution as in the final answer.

So far, the most useful tool for me, in physics, has been the rubric, which is widely used in grading open-ended assessments in the humanities. I grade open-ended problems with a simple four-point rubric, which is available on the syllabus from day one of the course. A student gets four points on a problem if their solution is complete, narrated, well developed, and contains no more than one minor error (pesky negative signs sometimes included, depending on the magnitude of that type of error). Three points go to a correct solution with few errors that is not as sophisticated as a four-point solution. Two points go to a solution that is reasonable and complete but incorrect. Finally, one point goes to a solution that is way off in left field (akin to a student not fully answering an essay prompt, I imagine).

This method has revolutionized the way I grade. No longer do I have to keep track of how many points are deducted for which type of misstep on what problem for how many students. In the past, I often would get through several tests before realizing that I wasn’t being consistent with point deductions, and then I’d have to go back and re-grade all the previous tests. Additionally, the rubric method encourages students to refer to a solution, which I post after the test is administered, and they are motivated to meet with me in person to discuss why they got a 2 versus a 3 on a given problem, for example. This opens up the opportunity to talk with them personally about their problem-solving skills and how they can improve them. The emphasis moves away from point-by-point deductions toward a more holistic view of problem solving.

I wonder which other assessment techniques could be shared across disciplines, and how. For example, I imagine that rubrics could be useful in grading part-writing in a music harmony class, in the same way they’ve been useful to me.

What suggestions do you think might be helpful for your colleagues in other fields? How can we facilitate the sharing of these methods? Let us know in the comments.

[Creative Commons licensed photo by Flickr user stuartpilbrow.]
