Teaching

Everyone Complains About Evaluations. A Nobel Laureate Offers an Alternative.

By Meg Bernhard
June 15, 2015
Carl Wieman: “It may seem surprising to evaluate the quality of teaching by looking only at the practices used by an instructor.”
Noah Berger for The Chronicle

The list of complaints about how colleges conduct course evaluations is long, and it seems to keep getting longer. A survey of thousands of professors, released last week by the American Association of University Professors, found that student evaluations are losing much of the value they once had. Earlier research had already shown that student evaluations fail to adequately describe teaching quality and often reflect judgments about an instructor’s appearance. But if not student evaluations, what should colleges use to judge the effectiveness of teaching?

Carl E. Wieman, a Nobel Prize-winning physicist and professor at Stanford University’s Graduate School of Education, says he may have found an answer. In a paper published recently in Change magazine, Mr. Wieman suggests another form of evaluation: judging professors based on an inventory of their teaching practices. The ultimate measure of teaching quality, he argues, is the extent to which professors use practices associated with better student outcomes.

“It may seem surprising to evaluate the quality of teaching by looking only at the practices used by an instructor,” Mr. Wieman wrote in the paper. But he said research over the past few decades had established a correlation between the teaching methods used and the amount of student learning.

Under Mr. Wieman’s proposed method, instructors would be required to fill out course inventories using a template developed by him and his wife, Sarah Gilbert, a visiting scholar at Stanford. An instructor would be asked to quantify, for example, the average number of times per course that students worked in small-group discussions.

A number of colleges and groups across the country have been testing Mr. Wieman’s teaching inventory. Some observers laud the evaluation system because it requires professors to reflect on and track progress in their courses. But others caution that the system may be too standardized and that it may be difficult to translate the teaching inventory into other fields of study, like the arts and humanities.

‘Active’ Learning

Nearly 10 years ago, Mr. Wieman began researching and experimenting with teaching methods in science departments at the University of Colorado at Boulder and the University of British Columbia. He also studied work on best teaching practices in math and science courses to determine which categories to include in his inventory. The research literature, Mr. Wieman said, suggests the most effective teaching method for science, technology, engineering, and mathematics courses — the STEM fields — is “active” learning, in which students engage in problem-solving activities during class time.

He said he was confident that his inventory method could be translated directly to engineering, and “modestly confident” that it could be used in fields outside of STEM.

But some experts and observers wonder how portable the approach can be. After all, teaching a course in chemistry is quite different from teaching one on English literature.

For Catharine H. Beyer, a research scientist in the University of Washington’s educational-assessment office, disciplines are unique and require different methods of teaching. In an art class, she said, student learning may best be assessed by a more “hands off” teaching approach, in which a teacher assigns a project and then gives students feedback weeks later. “We need to think a lot about what we’re trying to teach in each of these disciplines,” Ms. Beyer said.

In his paper, Mr. Wieman acknowledged that he had tested the inventory in math and sciences only, and that some courses, like seminars, labs, and project-based classes, probably would not fit within the rubric. For other fields, the inventory might require some adjustment. “In developing an inventory like this, it’s got to really capture all the standard teaching methods that people in that discipline use,” he said.

Other observers worry that standardizing teaching practices might detract from an instructor’s autonomy. Different professors have distinctive teaching styles, argued Geoffrey D. Sanborn, chair of the English department at Amherst College. “Education is fundamentally about the relationship between the teacher and the student,” he said, adding that in each course, faculty members bring unique teaching styles that they’ve been developing over their careers. Mr. Sanborn said he worried about potentially harmful effects if certain teaching practices became “enshrined” in a department or a university.

Michael P. Chaney, an associate professor of counseling at Oakland University, in Michigan, said that while he welcomed Mr. Wieman’s work, certain elements of instruction would be difficult to measure with an inventory.

“There’s an aspect of teaching that cannot be taught, and that’s the nontechnical skills like personality,” Mr. Chaney said.

‘Tyranny of Student Evaluations’

Mr. Wieman hopes that adoption of his inventory will lead colleges to put far less weight on the end-of-semester ritual of course evaluations, when students get to anonymously judge their professors on a standardized form. In his paper, Mr. Wieman wrote that one of his goals was to free professors from “the capricious, frustrating, and sometimes quite mean-spirited tyranny of student evaluations.”

But Larry A. Braskamp, who has researched faculty assessment and who next month will become interim president of Elmhurst College, said evaluations are “incomplete” without student input.

Ms. Beyer agreed, saying that “it is wrong to not give students a voice in their own learning.”

In an email interview, Mr. Wieman countered that he would keep student evaluations if he adopted the inventory system for a course but that he would use them “more appropriately.”

But even if Mr. Wieman’s system were used alongside other means of evaluation, another concern remains. Mr. Wieman acknowledged in his paper that his system measures “the use of a particular practice, not how well those practices are being used.” Still, he emphasized the “strong correlation” between learning outcomes in STEM courses and the teaching methods used, such as active learning, independent of other characteristics of the instruction.

Ms. Beyer agreed that while active learning in the classroom can work well, that does not give the entire picture. “Just using a teaching methodology alone doesn’t ensure that students are learning,” Ms. Beyer said. Others wondered whether professors would complete a lengthy survey on their course and whether they would accurately remember specific details about it.

Mr. Wieman insisted that the survey would take little time to fill out.

Diane Ebert-May, a professor of plant biology at Michigan State University, said that professors were likely to exaggerate their classroom practices. She was a co-author of a 2011 paper, “What We Say Is Not What We Do,” which found that self-reported data about faculty teaching often differs from observational data. While not lying outright, professors tend to misremember or inflate details, she said. That issue could be addressed, she added, by balancing self-reported data with plenty of observational data.

No matter what the method, Ms. Ebert-May said that more research was needed on how best to gauge learning in college.

“These tools are fine, we can use them, but I think we need to take the step and make sure students are actually learning,” she said. “There is a lot of work to do.”

© 2023 The Chronicle of Higher Education