The last time I posted, I made a public commitment that I would be moving away from traditional points-based grading systems and implementing specifications grading in the upcoming semester. It’s 20 days later, and after a week of in-depth trial and error (mostly error, it feels like), I have working prototypes of specs-grading-centered versions of both courses I’ll be teaching. With a few modifications (that’s your cue for suggestions, readers), these are basically ready to “ship”.
Discrete Structures for Computer Science 2 is the second semester of a year-long sequence in discrete mathematics aimed specifically at computer scientists. Here is the newly revamped syllabus for the course and here is a document that will go out with the syllabus that details exactly how the assessment and grading will work.
Modern Algebra 2 is the second half of a year-long sequence on, obviously, abstract algebra. This particular course focuses on group theory (in our sequence we start with rings, then go to groups). I only put the syllabus together this morning, and it’s not 100% complete yet (as of 12/22), but here it is -- I will be writing a separate, longer document about the assessment and grading later, but an abbreviated description is in the syllabus.
Generally speaking, I like how this turned out. Here’s an overview of the implementations; both courses are pretty similar, with one important difference I’ll point out momentarily.
I started by writing out all the learning outcomes I wanted students to demonstrate over the entire course. Initially, this was just a laundry list of things they should be able to do. As I wrote these out, though, in both courses I realized that the objectives fell into three categories. First, there are the foundational learning objectives that sit at the base of the Bloom’s Taxonomy pyramid -- tasks like stating definitions of terms, stating mathematical theorems, building simple examples, and doing simple calculations. Then there are the more advanced learning objectives that go higher up Bloom’s Taxonomy, like analyzing the structure of a graph, proving a theorem, or implementing an algorithm in code.
So far, that’s no surprise, since I’ve been saying for some time now that we should separate basic from advanced learning objectives as a means of setting up a flipped classroom structure. But then I discovered there’s a third category: a subset of the “advanced” learning objectives that are not only advanced but also especially important, and so merit a special level of assessment.
For example, in the Discrete Structures course, this is one of the “advanced” learning objectives:
Given an equivalence relation on a set, determine a partition of the set using the equivalence relation. Conversely, given a partition of a set, use it to define an equivalence relation on that set.
This is more complicated than, say, determining whether two given elements are related under an equivalence relation. It would be fairly easy to categorize this as “analysis” or “application” in Bloom terms. But while this is “advanced”, it’s not what I would consider to be one of the top objectives in the course. Not like, say, the following similar objective:
Given an equivalence relation on a set A and a point x in A, determine the equivalence class of x.
This objective is somewhat simpler than the previous one, although it’s still “advanced” in the sense of not being mere recall of a definition or a simple calculation. (It’s not a simple calculation in general, because determining an equivalence class often involves conjecturing what the equivalence class consists of and then proving it.) But if it came down to it, I would say it’s more important that a discrete math student be able to determine an equivalence class than that they be able to work with set partitions. (Your mileage may vary.)
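To make the distinction concrete, here is a small example of my own (it’s not taken from the course documents), using congruence mod 3 on the integers; it shows both the partition an equivalence relation induces and a single equivalence class:

```latex
% Illustrative example (mine, not from the course materials):
% on A = \mathbb{Z}, define x \sim y if and only if x \equiv y \pmod{3}.
% This relation partitions A into three residue classes:
\mathbb{Z} = [0] \cup [1] \cup [2],
\qquad \text{where, e.g.,} \qquad
[1] = \{\, n \in \mathbb{Z} : n \equiv 1 \pmod{3} \,\}
    = \{\dots, -5, -2, 1, 4, 7, \dots\}.
% Determining [1] means conjecturing the set on the right and then proving
% both containments -- not mere recall, but not a full structural analysis either.
```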
I bring this up because while coming up with these objectives was relatively easy, figuring out how to assess them, especially using specs grading, was not. First of all, the typical math course contains procedural knowledge that needs to be assessed, like computation and example-making, and usually we want to assess student mastery of this knowledge on demand and not through take-home assessments -- but most of the examples given in Nilson’s book involved take-home work done in courses without the same level of procedural knowledge as a math class. Second, upper-level math courses often involve proof, and writing proofs is... different.
So I determined that I’d assess students in the course in three different ways:
- For the simple, basic learning objectives, students will be assessed through a series of timed in-class quizzes called Concept Checks. The objectives covered by Concept Checks are labeled CC.
- The advanced objectives will be assessed through Learning Modules, which are what Nilson refers to as “bundles”. These are take-home, and there are two levels -- “Level 1”, which involves basic work on the advanced objectives, and “Level 2”, in which students face “higher hurdles”. The objectives assessed by the modules are labeled M.
- Among the M objectives is a subset -- roughly the top 50-60% -- that I consider to be the core of the course, so these are called CORE-M objectives. They will be assessed twice: once through the modules and then again in a timed setting.
Here’s the complete list of course objectives for Discrete Math and the complete list for Modern Algebra.
The documents I linked above go into detail on how all of this student work comes together to determine course grades, so if you’re interested, please read those, especially the tables that show the requirements for a particular grade. The bottom line is that students will get assessed on the basic material through timed quizzes (which have built-in revision opportunities), on the advanced material through modules, and on the most important advanced material through additional timed assessments (again with revision opportunities).
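To illustrate how a “bundle” system like this turns a body of Pass/No Pass work into a letter grade, here is a minimal Python sketch. The function, the three counts, and every threshold in it are invented for illustration only -- the actual requirements are in the linked tables, and my real system also distinguishes Level 1 from Level 2 modules.

```python
# A minimal sketch of the "bundle" idea. All thresholds below are hypothetical,
# invented for this example; they are NOT the requirements in the actual syllabus.

def course_grade(cc_passed, modules_passed, core_m_timed_passed):
    """Map counts of passed assessments to a letter grade.

    cc_passed           -- Concept Check objectives passed (timed quizzes)
    modules_passed      -- Learning Modules passed (Level 1 or Level 2)
    core_m_timed_passed -- CORE-M objectives also passed in a timed setting
    """
    # Each grade is a bundle of minimum requirements. A student earns the
    # highest grade whose bundle their body of work meets.
    bundles = [
        ("A", (18, 10, 8)),   # hypothetical numbers
        ("B", (15, 8, 6)),
        ("C", (12, 6, 4)),
        ("D", (9, 4, 2)),
    ]
    work = (cc_passed, modules_passed, core_m_timed_passed)
    for grade, minimums in bundles:
        if all(done >= needed for done, needed in zip(work, minimums)):
            return grade
    return "F"

# A student with 16 Concept Checks, 8 modules, and 6 timed CORE-M passes meets
# the (hypothetical) B bundle -- and no later result can take that away.
print(course_grade(16, 8, 6))  # -> B
```

Note that the grade depends only on which bundle the accumulated work satisfies, which is why, as I describe below, a grade once earned cannot be lowered.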
Here are some things that I like very much about how this system is working out.
1. There are no points on anything. Everything is graded either Pass or No Pass based on specifications. I have the specs in yet another document (here for the Discrete Math class). So there is no statistical jockeying, no grubbing for points, no treating points like magic fairy dust that I can sprinkle on a student’s performance and make them have a B. There’s a very good chance that the conversations I’ll be having with students now will be about math, rather than points.
2. And because of this, students must demonstrate mastery of a certain number of topics to earn a passing grade in the course. They cannot get a B by getting 80% on everything and 100% on nothing.
3. It puts students in control of their grades. Students are opting into the level of attainment they want and the workload they get, rather than being forced to do all the work and succeeding partially. And it’s transparent: If a student wants to know how far off he is from a B, he just looks up the list of requirements to get a B and compares it to his present body of work.
4. It makes students more intentional about their work. Students have to start the semester by thinking carefully about their strengths, weaknesses, and ambitions and then setting a goal for their grade; from that goal, they know what work they need to do. And they can renegotiate that goal at any time. No more just half-trying on everything and getting what they can get.
5. It relieves student stress in some important areas. One of the things I am going to point out to students, for example, is that once they’ve reached the threshold for a particular grade, they have that grade, and nothing can lower it. I figured out that it’s possible for a student to meet the requirements for a B around the eleventh week of the semester. If they can do that, then they have earned a B, and nothing can lower that grade, because they have amassed a body of work that shows me they’ve attained a “B” level of skill.
6. The “catch”, if there is one, is that there is no partial credit on anything. However, I see this as a great learning opportunity: one of the things I will have to teach students in the course is how to distinguish professionally acceptable work from work that is not professionally acceptable. This is a great thing to learn, especially in these 300/400-level courses, and frankly I don’t know why it hasn’t been a goal of my courses before now.
I made an important modification to the basic specs grading idea for the Modern Algebra class, which is proof-based. Proofs are different. It’s very easy to write a mathematical proof and be utterly convinced of its correctness, only to find a flaw -- and the line between a minor flaw and a fatal flaw can be extremely fine. Also, Bret Benesh made a great point when he said that specs grading presupposes that students can understand the criteria used to judge professionally acceptable work in a subject, and that while this is a realistic assumption in some disciplines, in mathematics it’s not realistic at all! It can take years, even decades, for a mathematics student to really come to an understanding of what makes a mathematical proof correct and what doesn’t.
So I modified the specs grading idea to use a three-level rubric rather than a two-level one: instead of Pass and No Pass, there’s Pass, Progressing, and No Pass. Progressing is for proofs that are “almost there” but have a small number of key corrections to make. It’s like a paper submitted for publication that needs to be revised: you don’t want to reject it, because it’s a pretty good paper, but it’s also not ready to be published yet. In my system for Modern Algebra 2, only “Pass” counts toward the requirements for a grade; but work marked “Progressing” can be revised and resubmitted with no penalty. Work marked “No Pass” can also be revised, but the student has to spend a token to do it.
I still have some finishing touches to put on these course designs, but I like where they are going, and I don’t foresee any major changes from here on out. Once the semester starts and I’m working with this system on a daily basis, I do expect to encounter the unexpected, so stay tuned.
As an aside, I’ve loved the conversation about grading and assessment that’s been ongoing on Google+ as I’ve live-blogged my course preparations. Theron Hitchman has been a valuable sounding board, through this Google Hangout that we had and in trading G+ posts. Bret Benesh I already mentioned, and Evelyn Lamb and several others have joined in elsewhere. The reports of G+ becoming a ghost town are greatly exaggerated.
Image: https://www.flickr.com/photos/amacvicar/