In a recent Chronicle Review essay, Jonathan Zimmerman called the coronavirus outbreak a natural experiment during which colleges can, at last, conduct research on the effectiveness of online learning. He claimed that the jury is out on “whether the move to online learning is good or bad for students” because research to date is “marred by the problem of self-selection” and because colleges sell the convenience of online study while papering over how “people with less academic opportunity and skill were likely to suffer more from online instruction.”
Based on those assumptions, Zimmerman called for research to take advantage of this “set of unprecedented natural experiments” in which everyone is now “compelled to take all of their classes online.” But this call for research rests on assumptions that are at best contentious and at worst untrue.
So far, Zimmerman noted, “no college has committed to using this crisis to determine what our students actually learn when we teach them online.” There is a good reason why. He acknowledged it himself when he said that “the abrupt and rushed shift to a new format might not make these courses representative of online instruction as a whole,” and that instructors thrust into remote instruction with little or no training, prep time, or support will “probably be less skilled than professors who have more experience with the medium.” Zimmerman brushed those concerns aside, however, as “the kinds of problems that a good social scientist can solve.”
But these are not solvable problems. They are evidence against Zimmerman’s call for research. They indicate strongly that remote instruction (moving face-to-face courses online with little preparation and training) is something qualitatively different from online teaching (purposely designing course interactions to offer learners choices, engagement, and meaningful connections). Online courses that have been designed over time, according to agreed-upon best practices such as the Quality Matters standards, have been shown, in study after study, to serve learners as well as or better than their face-to-face counterparts.
In our book, Evaluating Online Teaching, my co-authors and I focused on the impact of throwing untrained or poorly trained instructors into online environments. The short version: It ain’t pretty, and it ain’t effective. Good online teaching requires intentional design and intentional practices that share some things with face-to-face teaching but also require significant mental and procedural shifts on the part of both instructors and students.
It is odd that Zimmerman thinks we can somehow correct for the fact that remote instruction is staffed by hurried instructors whose work will not represent online teaching as a field, especially since he also believes we could not correct for the self-selection biases in traditional online courses. He can’t have it both ways.
In short, the sudden shift to remote instruction in response to the coronavirus pandemic is not comparable to the online-course offerings in any other time period. Or, as Kevin Gannon has tweeted, comparing remote-instruction programs with fully developed online courses “is like deciding to give people a swimming test during a flood.” The two types of instruction are apples and oranges.
That doesn’t mean we can’t collect data from the crisis to improve what colleges do. But instead of trying to study the effectiveness of online education, we should study the effectiveness of colleges’ emergency-response plans. Some excellent questions to ask when this is all over might include:
- What staffing levels in various areas of the institution are needed in order to respond to emergencies?
- What new structures, teams, and skill sets are most important to establish in order to prepare campus constituents for emergency readiness, and then for continued alternate-format operations?
- How do students under emergency conditions seek support, create “new normal” ways of learning together, and take an active part in their studies?
- Where did we lose students, either temporarily or permanently, and what factors correlate with those disconnections?
We can help answer these questions by continuing to collect data through student ratings, peer observations, and other assessment methods, while agreeing not to use that data in employment decisions such as promotion and tenure. Most course-rating instruments ask students about their perceptions of the instructor, the way the course is offered, and the level of access to materials, help, and support. Those are good things to ask about during moments of crisis, and they will provide rich information for a later review of what went well and where we want to shore up our urgent-action plans.
We are all now teaching in unusual times. The issue of observation and assessment of teaching quality during a fundamental shift in campus procedures is fraught. Few people will be doing their most effective teaching under shove-it-online circumstances. This isn’t the time to evaluate the worth of instructors, and it isn’t the time to measure the effectiveness of online learning.