Noted media scholar and friend-of-ProfHacker Jason Mittell has been experimenting with a new way of grading, called “specifications grading,” on the grounds that “figuring out a way to rethink the culture of grades would be the most effective and impactful reform” available at a school such as Middlebury.
Mittell borrows specifications grading from Linda Nilson (also see her book), and in Mittell’s description at least, it sounds very much like what many of us know as contract grading (see also), though without the pesky legal/economic metaphorical implications. He provides a gloriously detailed account of his new syllabus’s grading statement here, in which he explains what he’s trying, how it connects to the learning goals of the course, and, most importantly, the different ways students’ choices about their interests, time, and effort can drive their grades.
I mention all this background because Mittell has posted his first update on how the specifications grading experiment is working out. He’s posted sample prompts and specifications, so it’s possible to get a good sense of what he’s trying. He’s already learned a lot from his first efforts:
But the Satisfactory / Unsatisfactory marks do not correspond to Right / Wrong, nor Pass / Fail. This was one of the crucial insights I gained by marking these essays: Unsatisfactory really means “not done yet.” As I described to my class ... think of it like when your parents ask you to clean your room: you tell them that you’re done, they assess your work and say “not done yet!,” giving you a chance to keep cleaning to meet their standards. This is also more comparable to how projects are assessed within many professional worlds, where work that doesn’t meet expectations will require another round (at least) of work to bring it up to snuff. With a list of clear specifications, there are a range of ways that an essay might not be Satisfactory: some of the Unsatisfactory essays cited sources inappropriately, while a few included some factual inaccuracies. The most common reason why these essays did not meet the specifications was that they did not clearly iterate five distinct points, either through ineffective structure that muddied the analysis, or including multiple points that were too similar (e.g. it’s a clone and it’s an imitation).
It’s also raised an interesting conceptual problem: how, if at all, ought the difference between meeting and exceeding expectations be registered? In Mittell’s experiment to date, both performances would be “satisfactory,” and both might well earn As. Mittell comes to an important realization:
So what is the difference between meeting and exceeding expectations? On my assignments, typically it’s elegance and style in writing, subtlety of analysis, originality of insight, and depth of thinking. These are not learning goals for the course, and they are not things I directly teach—obviously I value all of these elements, and try to model them in leading discussion and assigning exemplary readings, but I do not focus on such advanced abilities in this introductory course. This is the crux for me: the students who are exceeding my expectations are doing so based on what they bring to the course, rather than what they are learning from the class.
It sounds like a great experiment, and a great reason to read Nilson’s book!
Do you have experience with specifications grading? Please share in the comments!
Photo “Specifications” by Flickr user Peter Morville / Creative Commons licensed BY-2.0