ProfHacker

Teaching, tech, and productivity.


Asking Students To Revise Your Syllabus

By Brian Croxall
May 10, 2012

[Lead image: a toy robot]

About a month ago, Inside Higher Ed reported on a study (PDF) conducted at the University of Akron on automated essay scoring software. The researchers compared the performance of the software with that of trained human graders on a sample of 22,000 essays. Surprisingly (or not–it is, after all, the 21st century), the Akron team found the differences between computational and human scoring to be minimal.


Of the many responses to this article, the ones that struck me most critiqued not the ability of the software but the type of writing that it is asked to grade: standardized exams. The University at Buffalo’s Alex Reid perhaps put it best: “If computers can read like people it’s because we have trained people to read like computers. [...] And FYC [first-year composition] essays are perhaps the best real world instantiation of the widget, the fictional product, produced merely as a generic example of production. They never leave the warehouse, never get shipped to market, and are never used for anything except test runs on the factory floor.” The problem that computerized grading exposes, in other words, is that we often ask our students to create work that isn’t connected to the outside world, to anything more than a rote and remote exchange between them and us.

Shortly thereafter, I found myself having a conversation with my colleague Roger Whitson about these articles (and a talk we were preparing on the open humanities). He is designing a new course for the fall, and in an effort to connect his students’ work to reality, he plans to introduce them to Omeka and then ask them to curate the course. His goal is for his students to think through how they present the content of the class, and the work that they have done, to a larger audience. By translating the work of a semester into a new format, he hopes that they will better synthesize what they’ve learned. (Roger’s interest in pedagogy has led to a couple of guest ProfHacker posts, such as a report on THATCamp Pedagogy and an issue of the Teaching Carnival.)

Roger’s idea sparked a different one in me. Instead of connecting students to the wider public, I’ve been wondering about directing their work at future students by inviting them to rewrite the syllabus for the course that they have just completed. I’m not envisioning the simple three-sentence response that I see on evaluations, either. Instead, I’m planning to ask students to redesign the course readings for an entire semester, coming up with new secondary reading material and even considering new primary texts to help teach the concepts of the course. Such a project leads students to much of the same original research in a field that they would do for a standard research project, but puts it into a finished product that, let’s face it, has a much better chance of making an impact on the world. I think such an assignment would work best in an upper-level course where majors would already have a sense of the field. In my particular case, I’m planning this assignment for a digital humanities course, where the most important material to cover in a semester will be constantly in flux.


As I’ve mentioned this idea to others (via the inevitable Twitter), I’ve heard from a number of people, including ProfHacker’s own Erin Templeton, who have tried similar assignments in the past. With the end of the semester here, however, I want to open the question to the ProfHacker community. Have you asked students to revise your syllabus? If so, what worked and what didn’t? If not, how do you connect your students’ work to the real world? Let us know in the comments!

Lead image: robot filtered / David DeHetre / http://creativecommons.org/licenses/by/2.0/
