The first speaker in the Model-Eliciting Activities (MEAs) session Monday morning said something that I’m still chewing on:
Misunderstanding is easier to correct than misconception.
She was referring to the results of her project, which took the usual framework for MEAs and added a confidence-level response item to student work. So students would work on their project, build their model, and, when they were done, rate their confidence in their solution. When you find high confidence levels attached to wrong answers, the speaker noted, you’ve uncovered a deep-seated misconception.
I didn’t have time to ask, but I wanted to know what she felt the difference was between a misunderstanding and a misconception. My own answer, which seems to fit what she was saying in the talk, is that a misunderstanding is an incorrect interpretation of an idea that is otherwise correctly understood. For example, in calculus, if we ask students to find the absolute minimum value of a function and they correctly find the zeroes of the first derivative but report only the x-value rather than the function value, that’s a misunderstanding. The student knows the concept but misread what the problem was asking for.
On the other hand, a misconception is an incorrect formation of a concept at the foundational level. The student has literally conceived the concept incorrectly. This is like when you ask calculus students to perform the optimization task above and they set the original function equal to 0 and do some algebra. The concept formation in the mind of the student is wrong on a basic level.
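To make the contrast concrete, here’s a small worked example (the particular function is just one I’m choosing for illustration): ask for the absolute minimum value of f(x) = x^2 - 4x on the interval [0, 5]. Setting f'(x) = 2x - 4 = 0 gives x = 2, and since f(2) = -4 is smaller than f(0) = 0 and f(5) = 5, the answer is -4. The student with a misunderstanding does all of that correctly but reports "x = 2"; the student with a misconception never touches the derivative and instead solves x^2 - 4x = 0 to get x = 0 and x = 4.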
If that’s right, then I totally agree with the presenter. A misunderstanding can be cleared up with a single clarifying question or a targeted piece of feedback. Clearing up a misconception requires undoing potentially years of layer upon layer of misconceived ideas. Her idea of tying confidence levels to MEA responses is a good way to get at these. So is using clicker questions -- and a much more timely way, I’d argue, since clicker questions can catch misconceptions almost in real time, whereas MEAs usually play out over a five-week span and there may or may not be any formative assessment of student work in the interim. Better yet, clicker questions with confidence levels capture the best of both worlds -- something like this:
If f'(a) = 0, then f attains a local extreme value at a.
a) True, and I am confident
b) True, but I am not confident
c) False, but I am not confident
d) False, and I am confident
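(The statement is false, of course -- f(x) = x^3 at a = 0 is the standard counterexample -- which is exactly what a confident vote for (a) reveals a student has never internalized.)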
I’m not the first person to think of this. It’s interesting to see how many confident wrong answers you get on the first round of voting in a peer instruction session on questions like this -- and even more so when those answers persist into the second round.
At the very least, it’s worth noting that not all incorrect student responses are created equal. Some are just misreadings of a task, some are brain-fart numerical mistakes, and some are symptoms of deep issues that require careful reworking. We have to design learning tasks that surface, and distinguish among, all of those possibilities.
Image: http://www.flickr.com/photos/esparta/