Karen Head is an assistant professor in the Georgia Institute of Technology’s School of Literature, Media, and Communication, and director of the university’s Communication Center. She reports periodically on her group’s efforts to develop and offer a massive open online course in freshman composition.
After months of preparation, we finally started our MOOC, “First-Year Composition 2.0,” at Georgia Tech. We are now through the first few weeks of the eight-week course, supported by the Bill & Melinda Gates Foundation. Veteran MOOC instructors warned me that the early weeks would be bumpy. The actual experience has often left me panicked—and worried that the course would not be successful. This is not like a traditional course, in which you have a day or two to deal with issues that come up in class. MOOC students expect immediate responses, and that means nearly 24/7 monitoring of the course.
I’ll begin with some of our positive experiences. One of the best decisions we made was to embed a Google Map on which we asked students to pin their locations. With more than 17,000 enrolled students, we are reaching people on every continent except Antarctica. As with all MOOCs, enrollment numbers are different from “active student” numbers. At the moment, 58 percent of our students are actively engaged. For some, this MOOC is the only way they can take such a course.
It is also exciting to see students forming communities within the discussion forums, to help one another with questions about content or technology. Our more ambitious students have developed study guides. Some self-identified writing-and-communication instructors have formed their own forum, to consider how they can use our course to teach their own students.
The most rewarding aspect of the course is the weekly "Hangout" session, live-streamed using Google Hangouts on Air. We invite students to join the discussions and ask questions. Finally, I get to know some of my students!
So, what hasn’t gone as planned? Certainly some things do not translate from a traditional classroom course to a MOOC. Our team realized quickly that we needed to do a better job cross-linking material on the course site. For example, if we mention the syllabus, we must link to it. Some students, we have learned, want a great deal of guidance.
We also underestimated the misunderstandings that can arise from idiomatic and discipline-specific language. We began the course by asking students to complete a Personal Benchmark Statement, only to discover that we needed to provide a definition of “benchmark.” A longer glossary of terms became a featured part of our site.
One thing I did not anticipate was that students would have trouble distinguishing between our instructional staff and the Coursera platform. Initially I found this frustrating, because I saw Coursera merely as the technology platform of our course. However, I’ve come to appreciate the deeper reasons for this misunderstanding.
For example, students have complained about not being able to complete in-video quizzes when they download the lecture videos. While our instructional team wanted to help them complete this work off-line—many students have very limited Internet access—we could not provide a way to do so. We pressed Coursera support-staff members for a solution, but they could not provide one.
When such circumstances arise, it is crucial to e-mail students to explain why we cannot resolve the problem. Most students are flexible and forgiving as long as we make the effort to be transparent. Nevertheless, it is easy to understand why students become frustrated. Having granted me the authority that an instructor would normally have, they expect that all decisions about the course are mine to make.
My limited ability to make key pedagogical choices is the most frustrating aspect of teaching a MOOC. Because of the way the Coursera platform is constructed, many wide-ranging pedagogical decisions have been hard-coded into the software—decisions that seem to have no educational rationale and that thwart the intent of our course.
And, of course, there is the question of how to evaluate the writing of thousands of students. The only way to accomplish that is to rely entirely on peer review and peer assessment. But first, as part of a required training module, students must read sample essays that have already been scored, and test themselves against those scores.
Because the qualitative process is central to our course, I wanted to require students to participate in peer work in order to get credit for assignments. When I wanted to make the penalty for not completing peer review a 100-percent deduction per assignment, the Coursera support team responded that the maximum deduction could be only 20 percent. Coursera acknowledged that other instructors had complained about the penalty figure but gave no indication as to when or whether the problem would be addressed. Predictably, many students have not completed the peer review, leaving others with little feedback. In my opinion, the instructor, not the platform, should determine how an assignment is evaluated.
Our team also was disappointed that we had to minimize our elaborate rubrics, which provide evaluation criteria, in order to work within the Coursera peer-assessment tool. The biggest problem was that we could not include our prepared explanations of why sample essays had been scored in a particular way.
To our dismay, however, the lack of explanations matters little, because after seven attempts to pass the training module, students are automatically placed in the peer-assessment system—even if all of their work was unsatisfactory. Students have also complained about this in the forums. Not surprisingly, they question the validity of feedback from students who failed the peer-review training. Our team would prefer to dictate that students must pass the training before reviewing the work of their peers, but again, we cannot change this.
As we approach the halfway point of the MOOC, it will be interesting to see how things progress. I will report again at the end of the course.