Acquiring expensive data tools does not guarantee real improvement
By Mark Salisbury | April 9, 2017
Despite all the cheerleading that seems to have accompanied big data’s arrival on campus, its promise to transform higher education continues to surge well ahead of its supporting evidence. That is not to say that thoughtful gathering and interpreting of information can’t be a powerful tool in the effort to help colleges improve. But the letdowns of the last overhyped remedy for higher education’s shortcomings (MOOCs, anyone?) ought to remind us that we would do well to genuinely understand the limitations and caveats of big-data analytics before jumping on a bandwagon that turns out to be in the wrong parade.
Although specific definitions vary, big-data analytics generally combines the ability to link disparate data sources, apply quantitative methods of analysis, and convey results interactively. However, at its core, it is only a tool that might help an organization’s efforts to improve. For example, colleges can already apply statistical analyses to identify the types of students who are less likely to persist to the second year or to graduate in four years, or even (if the institution is set up to capture more-granular data) to use support resources more frequently.
But acquiring the ability to deploy big-data analytics doesn’t guarantee anything. The first mistake colleges make is to conflate acquiring an expensive tool with achieving demonstrable and sustainable improvement. When a college’s leaders lack a clear understanding of what big-data analytics can and cannot do, this newfangled tool can end up draining an organization of time, money, and morale.
Moreover, a naïve allegiance to big-data analytics can subvert the very improvement that institutions hope to achieve. At the very least, an uncritical approach may predispose some to see causation where there is only correlation. Worse, others might succumb to the more detrimental assumption that getting answers from big-data analytics is no more complicated than asking Siri a question on your iPhone.
Take the issue of student retention. Research indicates that a student’s decision whether or not to persist can be influenced by numerous factors, including pre-college academic preparation, time-management skills, or the feeling that he or she doesn’t belong on the campus. An analysis of a single student cohort might reveal several statistically significant persistence predictors, some of which represent pre-college demographics (such as race, first-generation status, or socioeconomic status), while others denote first-year experiences (such as peer relationships, academic support, or sense of belonging).
But if the data set isn’t robust enough to determine which of these variables is more influential, or if a combination of pre-college characteristics and college experiences produces an additional effect above and beyond the effects produced by those two factors individually, then this institution is just as likely to make an expensive mistake or to see improvement merely by chance as it is to stumble upon a change that works. To make matters more difficult, institutions can tackle only those problems for which they have data. While pre-college preparation or course-grade data might be readily available, data on time management or a student’s sense of belonging may not. All of the analytic firepower in the world can’t make up for data that you don’t have.
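The interaction effect described above can be made concrete with toy numbers (every rate below is invented purely for illustration, not drawn from any institution's data). The point is that two factors' combined effect on persistence may be more or less than the sum of their individual effects, and only an analysis that tests for that interaction will reveal it:

```python
# Hypothetical first-to-second-year persistence rates for a single cohort,
# split by pre-college preparation and sense of belonging. All numbers invented.
rates = {
    ("low prep", "low belonging"): 0.55,
    ("low prep", "high belonging"): 0.80,
    ("high prep", "low belonging"): 0.78,
    ("high prep", "high belonging"): 0.95,
}

baseline = rates[("low prep", "low belonging")]
effect_prep = rates[("high prep", "low belonging")] - baseline    # effect of preparation alone
effect_belong = rates[("low prep", "high belonging")] - baseline  # effect of belonging alone
combined = rates[("high prep", "high belonging")] - baseline      # effect of both together

# If the two factors acted independently, their effects would simply add up.
expected_if_additive = effect_prep + effect_belong

# Any gap between the observed combined effect and the additive expectation
# is an interaction effect -- the kind of pattern a thin data set can't detect.
interaction = combined - expected_if_additive
print(f"interaction effect: {interaction:+.2f}")
```

With these made-up numbers the combined effect falls short of the additive expectation, a pattern a regression on a single small cohort could easily miss or misattribute.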
Equally troubling, blind faith in big-data analytics can devolve into a belief that quantitative methods are the only way to investigate a problem. This misstep would be especially troublesome for smaller colleges where the numbers of students from certain populations (such as underrepresented or first-generation students) are often small enough to require three to five years of data collection before any statistical analysis can be conducted. Yet these are the students who are often the most at risk and in need of immediate support. In such cases, focus-group findings can provide useful insights that can be swiftly applied, helping institutions sustain the momentum required to achieve improvement. For example, improving advising practices for first-year students can be as simple as asking them what has worked best and then plugging in their responses.
Sometimes qualitative research methods are the only plausible way to fully grasp the obstacles that hinder student success. Instead of doubling down on big-data analytics, colleges would be better off developing an array of research skills with the ability to discern which tool to use when.
Certainly, colleges need to improve educational effectiveness and efficiency. But change on a college campus is a process, not an event. In order to improve outcomes like student learning, retention, or completion, institutions must commit to a four-stage process:
Identify an end result or experience that can be improved.
Design a plausible change grounded in evidence.
Put that change into effect responsibly.
Assess its impact and adapt as necessary.
Consistently achieving positive results requires a carefully balanced investment of people, time, and resources across all four stages of this process. Administrators and faculty members can tell countless stories of campus initiatives that failed because they botched the design or bungled the implementation. Quantitative analytical skill is a useful tool for identifying problems or assessing the impact of a recently adopted change, but it cannot design an intervention, put a new program in place, or navigate the interpersonal nuances of helping colleagues adjust to change. Without committing to the entire process of improvement, even colleges with the best suite of analytic tools will find real improvement a long way off.
In addition, most ideas designed to increase efficiency or effectiveness in the corporate world don’t fit nicely into a higher-education context. Big-data analytics emerged in a corporate environment where adopting new ideas required only a mandate from the top brass in the corner office. But in higher education, where the organizational culture prioritizes academic freedom and defends its autonomous clusters, the most effective mechanism for bringing about real change comes from the bottom up. Again, this doesn’t mean that we should throw out the big-data babies with the analytical bathwater. But institutional investments in big-data analytics must not let these new tools become the playthings of the senior administration. If the goal is improved educational effectiveness, and if real change comes from the ground floor, then the tools must be easily accessible to the faculty and staff members who are best situated to design and bring about change.
All of this suggests that maybe colleges should listen more skeptically to the sales pitch of big data and instead focus on aligning their investment in improvement with the process through which real change actually occurs. This might start by asking if your institution collects the information most germane to improving student success. With the National Survey of Student Engagement, Gallup, and the decades of research on college-student outcomes, resources that spell out what data to collect and how to collect it are not hard to find. Furthermore, with careful collaboration and planning, colleges can gather this information entirely in-house at almost no cost. Data-management and data-analytics software are typically already available on campus. And learning how to use this software, as well as how to marry data sets together in order to analyze them more expansively, is often only a YouTube tutorial away.
With tools like these, the bulk of the institutional investment in improvement can be redeployed to tackle a wider range of challenges and empower the people on a campus who can make real change happen.
Mark Salisbury is assistant dean and director of institutional research and assessment at Augustana College, in Illinois.