Innovate, but Don’t Rock the Boat

Written with Mary Churchill

Mike: Education departments have been denigrated for a long time, often based on the claim that they make a fetish of process and do not adequately take substance into account. But there is a different reason for the defensiveness that often accompanies that judgment: It is primarily in education departments and rarely in other disciplines that faculty are most likely to discuss the relationship between teaching and learning.

Mary: This is related to the fact that so many academic departments seem to devalue teaching. We actively recruit and hire junior faculty members who are able to teach in innovative ways: utilizing global outreach, service learning, and new technologies. But we fail miserably at promoting and retaining these faculty members. We hire them for the “differences” they bring (significantly, many of these new hires are women and/or racial/ethnic minorities), but then we can’t deal with their innovations — particularly when it comes to evaluation.

Too many departments are unable or unwilling to change. Instead, they reward sameness. They reward teaching and scholarship that looks familiar, that looks like their own teaching and scholarship. One reader commented that there is little room for innovation in teaching. Junior faculty members are often afraid to take risks in their classrooms. They fear negative teaching evaluations and the “I told you so” looks and comments from senior faculty. They are afraid of hurting their bids for tenure or the renewal of their contracts.

Mike: But the sort of tenure evaluation that undervalues teaching did not originate with faculty. It was, and is, increasingly imposed by administration as part of establishing a certain reputation for the institution. I don’t mean that senior faculty members don’t support those criteria, only that their support may reflect a desire to be associated with a “ranked” institution.

At the same time, there are intrinsic difficulties in evaluating innovative teaching. All of this can contribute to what I consider an unwarranted, or at best exaggerated, suspicion on the part of some faculty that what is called innovative teaching is really a matter of trendiness rather than substance.

Mary: Mike, I couldn’t agree more. These new faculty positions often stem from administrative initiatives focused on cultural change or institutional transformation. This is problematic from the start, and although departments welcome the additional teaching staff, they rarely “buy in” to administratively imposed enterprises, in either scholarship or teaching. Before approving this type of hire, both administration and faculty should be required to outline the ways in which these types of innovations will “count” in tenure and promotion decisions. If that change doesn’t occur, the new hire is set up for failure.

Mike: Administrative oversight in this regard is likely to narrow what is considered “innovative,” and, at the same time, fail to respond to the fact that certain pedagogical innovations are necessary features of developments in research. But even more important is the fact that some of these innovations have to do with the sort of teaching we began discussing in earlier posts: where the teacher is a learner and the learner a teacher, and where the logic of discovery is emphasized more than the logic of justification.


2 Responses to “Innovate, but Don’t Rock the Boat”

  1. psel8105 says:

    Oh please. The vast majority of tenure cases are successful, even allowing for higher attrition at a handful of R1s — which rather undermines your point about faculty finding teaching innovation too risky. And, in my experience, new faculty at R1s are happy to have gotten those jobs, because then they don’t have to unlearn internalized notions about how unimportant teaching is. While new faculty at other sorts of institutions might come in with new ideas and innovations about how to teach, it often takes them a while to adjust to the reality that the students they are teaching have different skill sets and challenges than the privileged undergrads they encountered at their PhD alma maters.

    I know you all like the meme that new faculty are brimming with innovation of all sorts, only to have it snuffed out by deadwood senior faculty, but that set of stereotypes is as useless as my suggesting that most new faculty, no matter what kind of job they end up in, have to unlearn the value system they were taught at their PhD institutions about the relative merits of teaching and research.

  2. gglynn says:

    I just finished writing a white paper on Assessing and Evaluating Teaching and Courses that I believe is relevant to this article. You can download the article at

    Background & Assumptions
    Note: In this document I use the term evaluation to mean data collected to assign a score, and assessment to mean data collected for quality improvement that is not used to evaluate.

    Attempts to improve a course often result in a short-term drop in student satisfaction and course-evaluation scores. In addition, the amount of work and rigor expected of students is generally inversely correlated with evaluation scores. In retrospect, however, students may look back on these courses as their most valuable. Using this data as a measure of course quality is therefore counterproductive over the long term. What is most productive is the amount of ongoing faculty effort applied to improving the learning experience. From an institutional perspective, it is better to start with a poor instructor who is constantly improving than a mediocre instructor who is not. To encourage this effort, it should be planned, measured, and rewarded as part of annual faculty evaluations and in the rank-and-tenure process.
    Students are more likely to provide honest and constructive feedback on their learning experience if it will benefit them in the courses in which they are currently enrolled, or if it contributes to a system that helps them make informed decisions about course and instructor selection. Faculty are more likely to ask for, and act on, assessment input if it is private and not used to evaluate them.
    Course and instructor evaluation systems like are readily accessible and very popular with students. They are used extensively to make decisions about instructor and course selection but provide a shallow and incomplete picture. Raters tend to come from the extremes of the spectrum, either believing the instructor walks on water or that he or she is the worst they have ever encountered.

    Proposed Solution(s)
    • Evaluation of faculty in their roles as educators should be based on their efforts to improve the quality of the educational experience and not on student ratings.
    • Course assessments (surveys, focus groups, etc.) should occur early in the course, when they can still influence outcomes for the currently enrolled students. The questions should focus on providing feedback to the instructors and should be available only to them or to professional staff assisting them.
    • Course evaluations should be performed at the end of a course and should focus on providing future students with information about the course and its instructors. This information should be available to instructors and to all students registered at the university.

    • Faculty evaluation process.
    o Faculty should be encouraged to develop an ePortfolio. These systems enable them to selectively grant access to portions of the portfolio to the department chair, the P&T committee (many now use this as the submission process), students, potential employers, and even members of the public such as friends and family. A section of the portfolio should be dedicated to their performance as educators. Within this section, faculty could write annual teaching improvement plans (TIPs). They could also selectively document the feedback they received from students during the course assessment and evaluation process, reflect on their teaching, and record the actions they had taken during the year to improve their courses and experiment with new approaches. This information could be discussed with the department chair at each annual evaluation meeting and could then be used to decide on an evaluation score.
    o TIPs could be shared with the campus faculty development organization so it can customize its offerings and services based on faculty need and inform faculty about events that are relevant to their plans for the year (see related white paper on formalizing the faculty development process).
    • Course assessments
    o Students generally have a good sense of what is and is not working within the first few weeks of a course. Assessments should therefore be conducted early in the semester to have the biggest impact on the course.
    o An online survey and reporting system ensures rapid data return. A pool of rigorous questions, each linked to strategies for improving the area under investigation, could be used in addition to instructor-authored questions. Studies report that when students can complete the survey on their own time, the quality of the comments improves.
    o It is good practice for the instructor(s) to discuss the feedback, and any planned modifications to the course based on it, with the class. It is also worth explaining why the instructor disagrees with certain input and is not making a change, as understanding the instructor’s perspective will often eliminate a problem. This exchange reinforces for students the value of participating in the process and significantly increases satisfaction and engagement.
    o Surveys are a good tool for identifying issues, but for a more complete assessment, or to follow up on a survey, an in-class focus group conducted by an impartial assessment specialist produces more specific and actionable input.
    • Course evaluations
    o Survey questions should focus on factors that are important from the student perspective, although many of these will also provide valuable input to instructors. Examples of these types of questions are:
     ▪ The level of intellectual challenge was (Very high – Very low)
     ▪ The course changed my perspective (Agree)
     ▪ The course was (Fun – Very dry)
     ▪ The content of the course matched my expectations (Strongly agree – Strongly disagree)
     ▪ The content of the course was relevant (Agree)
     ▪ Compared to other classes I have taken at Stony Brook, the amount of work required was (Very high – Very low)
     ▪ For this course, the average time I spent per week was:
       • In class
       • In labs
       • Doing homework
       • Studying
     ▪ The learning experience was active (involved group work, discussions, hands-on experiences, etc.) (Agree)
     ▪ The learning experience was well designed (expectations were clear, homework reinforced class content, learning activities were appropriate, etc.) (Agree)
     ▪ The learning resources (textbooks, journal articles, etc.) that were recommended/required by the instructor were helpful to my learning (Agree)
     ▪ I found the following non-instructor-recommended/required learning resources helpful (text box for recommendations)
     ▪ Attendance in this class was (Important)
     ▪ I found the pace of the course (Too slow – Too fast)
     ▪ The instructor cared about my success in this course (Agree)
     ▪ Instructor feedback on assignments, etc., was helpful (Agree)
     ▪ If I had difficulties with, or questions about, the material, the instructor was helpful (Agree)
     ▪ The instructor(s)’ grading of tests and my work was (Very fair – Very unfair)
     ▪ The instructors were available via email/phone/after class/in office hours to assist me (Agree)
     ▪ Assuming this course was not a requirement for my program of study and I could freely choose whether or not to take it, the probability that I would enroll is (Very high)
     ▪ How would you characterize the testing style of this instructor?
     ▪ What I liked most about this course
     ▪ What I liked least about this course
     ▪ I would give the following advice to other students planning to take this course (it would be helpful if students could rate the usefulness of this advice, so that good information could float to the top of the list of comments)
    o Process
     ▪ Students are less likely to provide honest input if they feel they can be identified by university personnel. To increase student confidence in anonymity, the system should be hosted and managed by a non-university organization that commits to anonymity.
     ▪ Unfortunately, an anonymous system increases the possibility that the comments section of the public evaluations could contain defamatory remarks. To reduce this possibility, students should be held accountable for what they write: instructors should be able to complain to a student judiciary board, which can evaluate the comments and identify their origin. The course evaluation system should therefore retain data that identifies the submitter, and, for justified complaints, the service provider could release the identity to the student judiciary board.
     ▪ There are many national question sets, e.g. the IDEA survey, as well as questions provided by online course-evaluation companies, that could be used to benchmark against other institutions.

    • More faculty will participate in course assessments and engage in activities to improve their courses when the assessment and evaluation processes are separated.
    • Public course evaluation data will provide a more rigorous basis for student decisions on course and instructor selection than is currently available on sites like
    • Faculty development initiatives will focus more on known faculty needs.
    • The quality of learning experiences and student satisfaction will improve.