“Big data” has become all the rage among America’s community colleges.
The promise of big-data systems is that predictive analytics will help educators better understand where students get stuck so they can intervene more effectively to support students at risk of failure. These systems come with a steep price tag: generally $100,000 and up. Other outlays, rarely anticipated, include ongoing supports such as annual updates, maintenance, secure data storage, and customization. There are staff costs as well. Local staff have to enter data, check its quality, and ensure that users have access to the system. And educators have to organize to review the results and make use of the data.
At some community colleges these systems are working well and living up to their promise, providing more sophisticated analyses that help educators understand the challenges facing students. But colleagues who have worked to put these systems in place tell us they are frustrated by a lack of support to manage them and to get usable information into the right hands when needed. Even when that is accomplished, it is not enough. These systems leave unaddressed the same problem data solutions have struggled with for decades: Human beings still have to derive meaning from the data in order to bring about change.
Efforts to improve data use have tended to focus solely on improving the mechanics of an information system or on professional development to build users’ data literacy. These approaches have rarely improved student success. In response, we developed a model for data use that goes beyond mere analytics and training.
In our forthcoming book on improving data use among community colleges, we explore how recent research in neuroscience, psychology, behavioral economics, and organizational change can be integrated to help us reframe data use. The model we’ve developed has three components: (a) analytics, (b) human judgment and decision-making, and (c) organizational habits.
We argue that analytics, although crucial, is not enough, and certainly not in the way community colleges currently present data. Canned research reports run to dozens of pages of tables filled with rows and columns of data; in reports we have reviewed from some colleges, upwards of one-third of the cells contain zeros. There is no narrative in such reports, and educators must go on a fishing expedition to identify the issues and where to act. It is essential, instead, to focus on both what matters and what is within the institution’s control.
This is why our model is driven by both leading and lagging indicators. Too often colleges focus on lagging indicators, measures that come at the end of a process. Degree, certificate, and transfer rates are common lagging indicators: the big goals for which funders and accreditors hold colleges accountable. But can colleges influence these directly? In our experience, they cannot. These statistics report only on the survivors who end up graduating or transferring and say nothing about students lost along the way. Leading indicators, by contrast, such as in-class retention and course success (a grade of C or better), influence the lagging indicators and can be acted on with research-based interventions and supports.
Odessa College, in Texas, whose story is highlighted in our book, focused deeply on in-class retention (students who stay in a class to its completion rather than dropping out). That focus increased course success for all students, nearly closed the achievement gap between students of different ethnic and racial backgrounds, improved term-to-term persistence, and increased the graduation rate by 65 percent.
But good analytics are not enough. Human beings must turn data into meaningful information they can act upon, and our current reporting systems do not make this easy. In the dozens of colleges where we’ve worked, even those with sophisticated systems and high-end display tools, understanding the data is still a challenge. To solve this problem, we need to apply what is known about how people make judgments and decisions. Behavioral-economics research helps us understand how to present information so that educators can ask the right questions about what it means and make decisions that lead to changes in policy and practice. We recommend that data be disseminated only in support of improving student success or when a college must report it for compliance. Data presented “for information only” misses the point.
We also must acknowledge the historically ingrained habits that plague educational institutions. Too often at community colleges, data is presented as a lone item on a packed agenda instead of as the focus of decision-making meetings. Presented this way, it is discussed only briefly and devalued. Instead, we recommend that every meeting in which a decision must be made about student success begin with data on the issue, grounding the participants in a common understanding.
Colleges that adopt a big-data solution gain the potential to employ sophisticated analytics to better understand the challenges students face and to change policy and practice in ways that improve outcomes for all student populations. But these systems alone cannot increase student success; human judgment and organizational habits must change as well.
Brad C. Phillips is the founder, president, and chief executive of the Institute for Evidence-Based Change, in Encinitas, Calif.; Jordan E. Horowitz is the institute’s vice president. Their new book, Creating a Data-Informed Culture in Community Colleges: A New Model for Educators, is due out from Harvard Education Press in the fall.