With the pressure on for higher graduation rates, better retention, and more-engaged students, colleges are deploying a variety of tactics in their pursuit of student success. But whether they’re offering a first-year experience or a flipped classroom, how do they know if the programs are working?
For some colleges, the urgency to better understand how their programs affect students stems from state appropriations that are contingent on retention and completion rates. For others, tight budgets have prompted calls for evidence that programs are cost-effective and worthwhile. Still others have campus leaders who favor data-driven development and analysis of their programs.
Many of the so-called high-impact practices that colleges embrace, like learning communities and undergraduate research, are widely accepted as approaches that—if done well—have a positive impact on students. Still, campus officials increasingly want to know how effective those programs and policies actually are, and whether they need to be refined.
The California State University system, for instance, is in the early stages of an examination of programs designed to improve student success. Until recently, “a lot of the understanding of the efficacy of these practices has been anecdotal,” says Ken O’Donnell, senior director of student engagement and academic initiatives and partnerships.
Previously, he says, it was enough to defer to the expertise of staff and administrators in student affairs and academic affairs. Now the system requires campus-based programs like peer mentoring, first-year experience, and learning communities to develop systemwide definitions and goals. Within the next year or two, Mr. O’Donnell says, those “taxonomies,” as system officials call them, will be paired with a “student-success dashboard” of campus-specific federal data on graduation and retention rates.
The resulting picture, he hopes, will provide Cal State officials with detailed information on which of their programs are working, and for whom. If all goes as planned, it will work like this: Instead of wondering how chemistry majors are faring, or African-American students, or those who take part in learning communities, officials will be able to zero in on, say, the retention and graduation rates of African-American chemistry majors who participate in learning communities.
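To make that kind of query concrete, here is a minimal sketch in Python of how retention might be cross-tabulated by major, race, and program participation. The column names and the handful of records are hypothetical, not Cal State data or the system’s actual dashboard.

```python
# A minimal cross-tabulation of the sort described above.
# Column names and records are hypothetical, not Cal State data.
import pandas as pd

students = pd.DataFrame({
    "major": ["Chemistry", "Chemistry", "Biology", "Chemistry"],
    "race_ethnicity": ["African-American", "White", "African-American", "African-American"],
    "learning_community": [True, False, True, True],
    "retained_year2": [True, True, False, True],
})

# Retention rate for every combination of major, race/ethnicity,
# and learning-community participation
rates = (
    students
    .groupby(["major", "race_ethnicity", "learning_community"])["retained_year2"]
    .mean()
)
print(rates)
```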
Regardless of their circumstances or the way they carry out their evaluations, most colleges consider several factors when assessing their programs. Vincent Tinto, a professor emeritus at Syracuse University who has done extensive research on student success, sums up the central questions like this: If a program has a positive impact, does it justify the cost? If there’s no positive impact, then why do we do it? How do we judge new programs? What about those that have been in place for years? And finally: How are we doing—and what can we do better?
Unlike in a controlled experiment, it can be hard for colleges to pinpoint the root cause of any changes, good or bad, in a student’s performance. Student-centered programs rarely operate in isolation, and few institutions are comfortable designating a “control group” of students to receive little or no support.
“You would be hard-pressed to find a school in the country that says, ‘We’ve identified these 10 things that are supposed to improve retention, and we’re going to try them one at a time,’” says Alexander C. McCormick, director of the National Survey of Student Engagement. “You have a lot of things going on at once.”
Dave Jarrat, vice president for marketing at InsideTrack, a company that works with colleges to improve student success (see related article, Page B10), says the absence of any methodology to track the effectiveness of multiple initiatives simultaneously makes it hard for colleges to figure out what works. What’s more, he says, most institutions don’t think about measurement when they’re starting a new support program.
“It takes so much effort to get the program through the political process and get it up and running that often measuring the success of it is an afterthought,” Mr. Jarrat says. He thinks colleges should embark on new ventures with a spirit of “lean experimentation and constant measurement.”
A lack of reliable data complicates matters further, he says. Federal graduation rates reflect the experiences of only a small slice of students: those attending college full time, and for the first time. So if colleges can’t track the progress of those students relative to that of their peers on basic student-success outcomes, he says, it’s hard to know whether they’re on the right path with their support programs.
Indeed, to know what works, colleges need “real, solid data” about what happens with their students, says Kay McClenney, who recently retired as director of the Center for Community College Student Engagement at the University of Texas at Austin. And colleges that tend to shy away from closing ineffective programs need to shift from a culture of anecdotes, she says, to a culture of evidence.
“Confronting campuses with that information is a continuing act of courage on the part of a president or a dean,” she says. “But that’s what we see going on increasingly around the country.”
In one measure-as-you-go exercise, Indiana State University is in the midst of evaluating a new campus policy aimed at helping low-income students stay on track to graduate. A new state requirement calls for Indiana’s 21st Century Scholars—financially needy students who receive four-year scholarships from the state—to complete 30 credit hours by the end of their first year.
So Indiana State officials decided to try an experiment this past academic year that would help those scholars reach the 30-hour benchmark by the end of the summer. They gave students who’d completed 24 to 29 credit hours by May the option to stay on campus for the summer term and finish as many as six credit hours of courses at no charge. Those students were also eligible for a 50-percent discount on campus housing. Of the university’s 650 freshman scholars, says Joshua Powers, associate vice president for student success, 125 were eligible. More than two-thirds of them opted to finish up the final credit hours.
In late August, officials crunched the numbers. “Is it the right kind of incentive?” Mr. Powers had wondered. “How did it work?”
They found that 70 percent of the scholars who had stuck around to finish those last few credits succeeded. Mr. Powers credits the experiment with a 2-percent increase in the retention rate of Indiana State’s freshman 21st Century Scholars.
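As a back-of-the-envelope reading of those figures, the short sketch below works out the rough head counts; the exact numbers beyond what Mr. Powers cites are assumptions for illustration only.

```python
# Back-of-the-envelope reading of the figures reported above; the exact
# head counts beyond those quoted are assumptions for illustration.
eligible = 125                      # scholars with 24-29 credit hours by May
opted_in = round(eligible * 2 / 3)  # "more than two-thirds" -> at least ~83
completed = round(opted_in * 0.70)  # 70 percent finished the summer credits

print(f"At least {opted_in} scholars opted in; roughly {completed} reached 30 credit hours.")
```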
Sinclair Community College, in Dayton, Ohio, also leans heavily on data to inform its decisions about student-support programs. The college isolated three cohorts of students, enrolled in 1998, 2003, and 2008, and followed all 17,000 of them for five years. It also looked at which student-success efforts were in place at the time.
“There is a bump right around the time we started these innovative programs,” says Kathleen Cleary, associate provost for student completion. More students were graduating or transferring into four-year institutions; fewer were dropping out. While it would be difficult to attribute positive gains to any one venture, she says, “I feel fairly confident that the programs line up.”
The data suggested that Sinclair’s student-success ventures were helping, among them resource centers at area high schools that help students become college-ready and a “boot camp” of one-week refresher courses, begun in 2010, for students who aren’t ready for college-level math and English. But while the metrics were moving in the right direction, the analysis also revealed a lack of coordination among programs and offices, Ms. Cleary says.
This fall Sinclair will roll out a new, holistic approach to student success that is directly shaped by those findings. The program, known as Career Communities, will document and coordinate the advice that students receive on academic, career, personal, and financial matters. It will also involve constant evaluation: How many students reached milestones of 12 credit hours, or 24, or 36? How many earned certificates, or degrees? And for students, it will include surveys featuring a key question: Is this helping you?
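A minimal sketch of the kind of milestone tally such a program might run appears below; the field names and records are hypothetical, not Sinclair’s actual system.

```python
# Illustrative milestone tally of the sort such a program might track.
# Field names and records are hypothetical.
import pandas as pd

progress = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "credit_hours": [14, 27, 9, 40],
    "credential": [None, "certificate", None, "degree"],
})

# How many students have reached each credit-hour milestone?
for milestone in (12, 24, 36):
    count = int((progress["credit_hours"] >= milestone).sum())
    print(f"Reached {milestone} credit hours: {count}")

# How many have earned a certificate or degree?
print("Earned a certificate or degree:", int(progress["credential"].notna().sum()))
```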
Constant measurement is essential, Ms. Cleary says. The average community-college student takes five years to complete his or her two-year degree, she notes. “We can’t wait five years to see if our changes are working.”