To the Editor:
In the advice essay, “Why Doctoral Programs Should Require Courses on Pedagogy” (The Chronicle, March 16), Benjamin Rifkin, Rebecca S. Natow, Nicholas P. Salter, and Shayla Shorter suggest that students in Ph.D. programs should be required to study pedagogy. Their aim is one anyone would agree is reasonable: We should help our graduate students become the best teachers they can be.
The issue, alas (at least in my own discipline of mathematics), is that there is very little pedagogical knowledge to provide to graduate students. What we would like to know is how to cause better learning outcomes in the college mathematics classroom. The literature, though, consists largely of anecdotes and reports of correlations. While these may be useful starting points for future causal investigations, they do not themselves provide the desired knowledge. What this literature needs are more randomized controlled trials (RCTs).
Some causal research in the higher-ed mathematics classroom does exist. But there isn’t much. I’ve asked colleagues in math education, my university’s center for teaching and learning, and leaders of my profession’s teaching-focused professional society for any relevant literature. They have all told me that almost no causal research exists. As an educator, I find this extremely frustrating.
I spent almost a decade leading my university’s precalculus teaching (with multiple instructors and some 1,700 students annually). During that time I instituted daily in-class worksheets, read the educational literature, and looked for ideas that had been proven to work. I was as willing as anyone to try things that might lead to better learning outcomes. I have won teaching awards at two universities, and I believe, for instance, in “active learning,” but I have never seen the causal research that would justify my belief that it is actually better than traditional lectures. What you’ll find in the literature are reports of ideas people have tried, and reports of correlations. These, of course, are too thin to pass on to graduate students as “knowledge.”
I have occasionally been told that some findings appear so often in the literature that they must be true. But correlations, no matter how often reported and no matter how large the studies, are still correlations. It is true, for instance, that shoe size in children correlates with reading ability. Even if this fact were borne out in study after study, and appeared in a highly cited meta-analysis, it still would not follow that giving kids larger shoes would cause better reading outcomes. No matter how enthusiastic the education literature was about giving kids larger shoes, it would still be irrelevant to improving reading outcomes.
What is needed is genuine causal research: RCTs, or at least observational studies in which the work has been done to identify and address potential confounding variables. When we have real knowledge of how to cause better learning outcomes in the mathematics classroom, I will be the first person to jump on the pedagogical-training bandwagon.
Until then, graduate-school class time is far better spent on disciplinary knowledge (that is, advanced mathematics for mathematics grad students). Grad students would be better served by learning more mathematics than by learning the latest classroom fads. Until the causal research is done, it is reasonable for each educator to chart her own course and make her own judgments. As things are, all anyone can do is dive in, talk to successful teachers, and try whatever they think might work, just like the rest of us did.
Craig Larson
Professor, Department of Mathematics and Applied Mathematics
Virginia Commonwealth University
Richmond, Va.