I wish Jonathan Swift could come back to earth, sit side by side with me in front of this keyboard, and guide me as I write this post. Since that’s not going to happen, I’ll write this without the attitude it deserves. (Where, oh where, is my secretary when I need her?)
At the most recent faculty meeting at my college, I listened patiently (well, sort of) while a very smart, well-intentioned, and hard-working colleague made a presentation about how our college is going to “measure” the specific “learning goal” of “critical thinking” — one of several “learning goals” that we faculty, in fulfillment of our mandate to participate in “outcomes assessment,” identified as part of the purpose of our distribution courses.
I’ll skip the details of the presentation (it’s no fault of my colleague, but not only is there no way to make a silk purse out of the sow’s ear of outcomes-assessment practices, there’s also no way to make outcomes-assessment jargon worth listening to) and go straight to the summary. Our plan of attack is to collect “meaningful data” on “critical thinking” as demonstrated in a select group of distribution courses. After we’ve gathered this “meaningful data,” we’ll figure out whether students are learning the “critical thinking” we’ve identified as one of our “learning goals.” In practice, this means that student papers will be randomly selected, analyzed, and then analyzed again, a year or two later, for evidence that our students are improving (or getting worse) in their “critical thinking.”
I know what “critical thinking” is (although I have to admit it took me a long time to figure out why plain, old-fashioned thinking — the kind used by such inferior thinkers as Aristotle, Fibonacci, Descartes, Hume, or Poincaré — no longer worked). I also know what data are. “Hel-lo,” as my students would say: Data are information — often presented in the form of statistics or lists.
Without interpretation, data are meaningless. To become “meaningful data,” they must be gathered purposively and sifted through a conceptual scheme generated from outside the data themselves. Data are most meaningful when they’re obtained and interpreted rigorously, that is, using the methods of the hard sciences and mathematics. Data should always be interpreted carefully and applied narrowly, the way scientists and mathematicians handle them.
The inherent flaw in relying on data is that data, by their nature, seduce their handlers and their readers into believing they’re seeing “truth.” Lists, columns, and charts, no matter their relation to reality, and no matter how sloppily the underlying data were gathered, always appear to sum up the truth.
Moreover, when data are collected on those aspects of human beings involving quality rather than quantity — for example, something like “critical thinking” — the result is that the thing that’s purportedly being analyzed is likely to be lost in the process. When it comes to quality, the whole is always, without exception, more than the sum of its parts. (Love, to take an obvious example, is always more than recordable measurements of pulse, blood flow, and the observable actions of lovers in the throes of love.)
Let’s assume my college successfully collects writing samples from students and then has a committee look over those samples to check for “critical thinking.” The committee will most likely assign numbers from, say, 1 to 5, to rank the writing samples according to the evidence for “critical thinking” found in them.
Let’s then assume the committee successfully comes up with some data, based on the numbers they assign to the writing samples, and compares these data with the data obtained from a second batch of writing samples collected a couple of years later. (Note that this is very similar to tracking how grades change over time, and that had faculty been doing their jobs these past years, grades — a form of assessment that allows for quality as well as quantity — would do the trick without the smoke and mirrors of outcomes assessment; but that’s another story.)
I see it already — the report, neatly charted out, demonstrating that there’s been a 7.6-percent increase in our students’ “critical thinking” — or, conversely, a 7.6-percent decrease. Following outcomes-assessment protocol, we will then “close the loop” — either by taking steps to reverse the decline in our students’ “critical thinking” skills or, conversely, by concluding that our students are doing a very good job at “critical thinking.”
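For readers curious about how little machinery stands behind such a number, here is a minimal sketch, in Python, of the arithmetic that turns two batches of rubric scores into a percent-change headline. Every score, cohort, and label in it is hypothetical, invented purely for illustration; nothing here is drawn from any actual assessment.

```python
# A minimal sketch of the arithmetic behind a "7.6-percent change" headline.
# All rubric scores and cohort labels are hypothetical, invented for illustration.

def mean(scores):
    """Average rubric score (1-5 scale) for one batch of sampled papers."""
    return sum(scores) / len(scores)

# Committee scores for randomly sampled papers, two assessment cycles apart.
baseline_scores = [3, 2, 4, 3, 3, 2, 4, 3, 3, 2]   # first cycle (hypothetical)
followup_scores = [3, 3, 4, 3, 4, 2, 4, 3, 3, 3]   # second cycle (hypothetical)

baseline = mean(baseline_scores)
followup = mean(followup_scores)

# Percent change in mean "critical thinking" score between the two samples.
percent_change = (followup - baseline) / baseline * 100
print(f"mean {baseline:.2f} -> {followup:.2f}: {percent_change:+.1f}% change")
```

The point of the sketch is simply that the headline figure is a ratio of two averages of committee-assigned integers; everything contested in this essay happens before that arithmetic, in the assigning of the numbers themselves.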
Although some people complain about the onerousness of outcomes assessment, that’s not a significant problem. And we all want to improve “critical thinking” (or at least “thinking”). The problem here lies in the pseudoscientific measurement of things that are neither measurable by numbers nor translatable into data but are instead defined by their quality. To “measure” quality is absurd. The verb we need to use is judge — a very different action that many people apparently find repugnant.
What are the real indices of “critical thinking” in college students? The “critical thinking” we’re trying to track, using data that are collected without any rigor, can’t be found in any data. Moreover, the bad writing of many of today’s college students will easily be mistaken for a lack of “critical thinking.” It’s very plausible that many of our students, although writing very poorly, exercise critical thinking very well — in such actions as dropping and adding courses according to their analysis of which courses teach them something important and which don’t.
Now that outcomes-assessment practices are humming along, and we university professors are flapping our wings in the mud of actual assessment, we should take responsibility for what we’ve done. We should, at the very least, own up to the fact that we are the ones who need help with critical thinking.