Any college leader considering a curriculum change for his or her institution has a lot of questions to ask and answer. First, what are the specific goals? To increase graduation rates? To increase particular knowledge in certain majors? And what changes in the curriculum would achieve those goals? We’ve gone through multiple curriculum reforms at the City University of New York over the past 15 years, and it’s never an easy process. Some faculty members, as well as administrators, can be skeptical and resistant to change, and resources to carry out the reforms are hard to obtain. One of the most important things we have learned during that time is that relevant, clear data can help you make better decisions about curriculum reform. That means you need to put a premium on data — both collecting it and analyzing it.
First, you will want descriptive data, which will allow you to see, for example, that while the grades in English 101 are relatively low, the ones in English 102 are relatively high. That’s a start, but you will need much more information — to identify some of the many possible reasons for the grade difference, and to construct and test hypotheses about what might be causing it.
You should, of course, ask the faculty for their opinions. Beyond the fact that consulting people about an initiative helps to gain their support, faculty members can provide extremely useful guidance on which aspects of the curriculum can be improved. But individual opinions do not necessarily constitute objective evidence. Studies have shown that people’s observations, even of something as seemingly clear-cut as whether a rat solved a maze, can be influenced by what they expect to happen. And faculty members, of course, have an inherent conflict of interest in curricular matters: the decisions they favor can directly affect their own work lives.
Therefore, you also need qualitative and quantitative data, collected and interpreted appropriately using behavioral science. Trained data analysts — either outsiders or your institutional-research staff and faculty members who have been trained to measure behavior and analyze data — can determine, for example, if the higher pass rate in English 102 is due to lower-performing students dropping out between English 101 and 102.
Similarly, such analysts will understand that assessing the effect of a new course on student learning solely by looking at student grades in a single section of that course provides little information. You will not know whether the grades reflect the teaching ability of the particular faculty member, whether that instructor’s grading standards are typical, or whether more motivated students enrolled in the new course. As another example, suppose new general-education requirements and a new advising system were instituted at the same time that student retention increased. Data analysts will understand that the increased retention may have been due to the new general-education requirements, the new advising system, or another variable altogether.
One way to avoid such problems is to conduct a randomized controlled trial: randomly assign students to one curriculum or the other, so that the curriculum is the only thing that differs between the two groups. That allows you to determine the causal effects of the new curriculum. Using a randomized controlled trial at CUNY, we obtained higher pass rates with corequisite math remediation than with traditional math remediation.
However, randomized controlled trials can be time-consuming and expensive, so you might instead use quasi-experimental statistical techniques. For example, you can compare the performance of a group of students exposed to a new curriculum with that of another group exposed to the old curriculum, matching the two groups on every measurable student characteristic (e.g., SAT scores). Such strategies cannot account for unmeasured student characteristics, such as motivation, but they provide more information than simply looking at student performance in the new course.
All of this assumes that your institution must collect the relevant data itself. But you can — and should — use research conducted by others as well. For example, in deciding whether your goal should be to increase graduation rates, you should compare your institution’s rates with those of similar institutions. In deciding whether a curricular innovation adopted elsewhere might be useful at your institution, someone skilled in evaluating data can determine whether that innovation is supported by evidence collected in a variety of settings, which increases the probability that it will apply to your own. Corequisite mathematics remediation, for instance, has been shown to be more effective than traditional remediation across different degree programs and institutions, and whether or not students assessed as needing remediation are taught alongside other students.
Identifying a problem, collecting and analyzing relevant data, devising a solution, and testing it is a very long procedure. You may not complete it during your tenure, and your successor may not want to. Plus, it guarantees there will be many years in which students will not have the best curriculum. Employing the data of others can speed the decision to implement an improved curriculum.
The implications of those data, though, will need to be adapted for your institution. For example, a key element of CUNY’s ASAP program, which a randomized controlled trial has shown doubles associate-degree graduation rates, is free intracity transportation. That benefit would not help rural students, so adapting ASAP for other locales may mean devising alternative ways to cover students’ transportation costs. Successfully adapting the research of others is itself an important skill, one that will get you to your goal of helping students faster.
Only 15 years ago, there was very little rigorous higher-education data, but since then, the use of randomized controlled trials, other rigorous data-collection strategies, and evidence-based decision making in general has grown substantially. Administrators considering curriculum reform should recognize the importance of behavioral data and the need for trained analysts who can effectively generate and evaluate such data. These administrators will be in a better position to make decisions about when and how to revise the curriculum, and that can only benefit students.
Alexandra W. Logue is a research professor in the Center for Advanced Study in Education at the Graduate Center, City University of New York, and former executive vice chancellor and university provost of the CUNY system. She is the author, most recently, of Pathways to Reform: Credits and Conflict at The City University of New York (Princeton University Press, 2017).