Personalized learning! Adaptive learning! Brain science! Learning science! Big data! New and improved! The marketing for "personalized" educational products can feel a little like a late-night infomercial. Rather than getting common-sense explanations of how the products work or being provided with peer-reviewed research to justify ambitious (if vague) claims, we are simply reassured that a product works because it is "based on the science of neuroplasticity."
About This Series
This commentary is part of a series by the authors of the ed-tech blog e-Literate, Michael Feldstein and Phil Hill.
In other words, there are plenty of good reasons to be suspicious of the marketing claims even when you trust the company’s leadership completely. And how often do you trust your vendor’s leadership completely? In the worst of cases, I’ve gone so far as to call some marketing claims "snake oil."
Consider the following scenario. Ask yourself after each sentence how much the new details change how you would think about the claim:
- The designers of a certain educational software product claim that when students use their product for several semesters, their graduation rates rise by more than 20 percent.
- Those designers present a paper making this claim at the leading academic conference of learning-analytics researchers.
- The paper they present has not been peer reviewed.
- The designers repeat their claims at a number of ed-tech conferences and are quoted in flattering articles that appear in the education press.
- Several critics of the graduation-rate claim emerge, raising the question of whether the results might actually be a statistical error due to poor experimental design.
- One of those critics produces a simulation showing that the same results claimed by the product could be duplicated by feeding students chocolate in several successive semesters.
- Members of the learning-analytics research community think the simulation is plausible and call for the designers of the product to release more information about their study.
- The designers of the product do not provide the additional information, and their employer chooses not to comment on the critique.
As you might have guessed, these statements are not hypothetical but historical facts. The product in question is called Course Signals, and the designers were staff members working for Purdue University at the time of the controversy. The author of the candy-retention simulation is Alfred Essa, who now works at McGraw-Hill Education but whom I met when he was CIO at MIT’s Sloan School of Management. The root of the problem is not raw greed but a failure of the scientific peer-review process, amplified by the echo chamber of conventional and social media.
I can’t even say that the amplification was caused by "marketing," at least in the conventional sense. The university was spreading the word among its peers about the findings of its institutional research. An earlier study of Course Signals conducted by the same research group, which showed that Course Signals helps students succeed within individual courses, was well received by the academic research community and has not been questioned. The researchers, one of whom I know fairly well, have good reputations. They probably just made a mistake in the experimental design of their second study. It happens. As now-former staff members, they are not at liberty to comment on institutional research. I honestly don’t know why Purdue itself failed to respond to its critics.
What I do know is that none of this would have happened had there been robust peer review of Purdue’s work in the first place. For a lot of reasons, related both to the intrinsic nature of the work and to the organizational challenges of very different academic disciplines still learning how best to collaborate with each other, applied-learning-sciences research claims are actually very difficult to evaluate. This is the core problem that needs to be solved if we want to improve both the effectiveness of educational technology and our ability to evaluate particular effectiveness claims.
But academic suspicion of commercial interests is so strong that narratives of corporate greed routinely eclipse more-nuanced stories that include problems with the ways that academe conducts its own business. For example, a few weeks ago I wrote a column here arguing that colleges and universities should consider empowering students to make their own decisions about their data privacy rather than having the institution always make those decisions on their behalf. Tracy Mitrano, academic dean of cybersecurity certificate programs at the University of Massachusetts, wrote a letter to the editor in response, claiming I was using rhetorical "sleight of hand" to somehow justify handing student data over to vendors.
But privacy policies don’t always work to thwart commercial interests and help students. In fact, in the case of educational research, the opposite is true. Textbook and educational-software vendors do not need any changes in regulations or contracts to conduct large-scale research on the effectiveness of their products across multiple institutions. In contrast, if university researchers wanted access to the same kind of data, they would have to seek permission not just from their own university’s institutional review board but from the IRB of every participating institution. And those boards do not all follow the same review policies.
In this case, existing student privacy policies favor the vendors. IRBs exist for very good reasons, but it is worth asking whether it makes sense to adopt cross-institutional review standards designed to facilitate responsible academic educational research.
The larger point is that reflexive anticommercialism is an easy answer that can mask deeper problems. To get more-trustworthy claims of real gains in educational outcomes, the academy needs both to pursue the highest standards of academic research and to take a long, hard look at the institutional processes that make that pursuit unnecessarily difficult.
As part of that, if you want better learning science, then don’t push vendors out of academe. Instead, push them in. Make them show their work. Make them subject their research to peer review. Vendors should be accountable for their claims. As should universities. The goal for us all should be improving the lives of students, and the main tool for learning how to better achieve that goal should be rigorous research conducted in the context of a strong environment of collegial peer review.
Michael Feldstein is a partner at MindWires Consulting, co-publisher of the e-Literate blog, and co-producer of e-Literate TV.