The evidence against the effectiveness of student evaluations, whether as a measure of instructional success or as feedback for redesigning courses, is mounting. And while many institutions have established peer or faculty-center observation programs to provide more direct feedback on teaching, online courses can feel isolated from these efforts. It’s not easy to invite a fellow faculty member to “observe” an online class in the traditional sense, and gathering immediate feedback from students is difficult when they are physically removed and behind their own computer screens. If you teach an online course and want to improve your students’ experience, it often falls to you to seek out support and useful assessment.
Several universities have implemented programs for peer review of online courses. Penn State, for example, has a process for peer review of online courses that can be adapted easily. The emphasis of that process is on holistic feedback: reviewers look for places where faculty members make themselves accessible through digital tools, for assignments and spaces that encourage student interaction, for opportunities for active learning, for the methods and speed of feedback, and for other signs of an engaged online classroom. This can actually be more valuable than a traditional classroom observation, which usually involves viewing a single class session and can thus provide only a limited and distorted view of what the teaching methods look like on a larger scale, but it is also a demanding and time-consuming process. (You can read more about the use of this process at Penn State at the Online Learning Consortium.)
These methods of evaluation can overlook aspects of online teaching that rarely play as large a role in face-to-face teaching: namely, interface and interaction design. Few faculty have much say in these decisions, which are made the moment the campus commits to a particular platform for online teaching. Learning management systems determine a great deal of the look and feel of a course, and their structures likewise enforce certain methods of organization. Even so, two professors using the same learning management system are likely to use entirely different strategies for translating their approach to teaching into the building of a course. This is particularly true for courses like the ones I teach, which are rare examples of online courses in a program that is primarily face-to-face and thus are not the product of dedicated online faculty. Useful online observations therefore need to consider not only course content and engagement but also how the material is presented and designed. Here are my priorities when reviewing online course materials through the lens of user-centered design:
Clear central hub. Students need a starting point from which every trail is clear and easy to follow. Some systems create or determine the look of this central hub, but in systems like Canvas the professor has the flexibility to set the main page students see when they land on the site. (As faculty, it is essential to use the “student view” when building this page: many options that are visible to faculty won’t be visible to students!)
Organizational consistency. Learning management systems often have multiple ways to share content, with pages, announcements, blog posts, discussion forums, and assignments often existing in their own categories. Faculty need to determine which of these tools make the most sense for their class and then use them consistently: one project shouldn’t be hidden in an announcement while another is in the system’s assignment category.
Accessible use of mixed modalities. Teaching online offers lots of opportunities to bring in multimedia: I like to combine video, games, movie clips, photos, and memes liberally. However, in our enthusiasm to include lots of different experiences online, we as faculty can often forget to prioritize universal design and accessibility. Most institutions offer resources for evaluating your course materials (here’s ours at UCF), but getting an outside assessment can be invaluable.
Evaluating these elements of an online course, in addition to the content itself, can be accomplished through rubric-based assessment, such as the Penn State example, or through less formal methods. One illuminating strategy I recommend is asking an outside observer to navigate through the course while screencasting and narrating their experience and confusion as they explore.
What are your strategies for conducting peer observations and getting teaching feedback in online courses? Share them in the comments!