Have you ever wondered why learning-management systems, which just about everyone on campus uses every day to keep classes running, seem destined to disappoint, year after year? I can tell you why. It’s because of a dirty word that academics don’t like to talk about: procurement.
University procurement processes are painful and bureaucratic, particularly at public institutions, where state laws and regulations come into play. On top of that, it’s hard to buy a product that will satisfy the needs of an entire community, especially for complex processes like teaching and learning.
Learning-management systems, like any product, evolve because of a kind of natural selection — or unnatural selection, in this case. Their makers must sell to survive, in an environment that resembles the Galápagos Islands from hell: Competition forces the companies to produce features that colleges say they need, whether or not the colleges actually need them. Unfortunately, the procurement process all too often results in a long list of requirements that bear little resemblance to broad campus needs.
In a typical LMS selection, the person managing the process will start by convening a selection committee and gathering input. The initial feedback from the committee looks something like this:
- Professor John proclaims that he spent the last five years getting his Blackboard course set up exactly the way he likes it. He is not moving to another LMS unless it works exactly the same as Blackboard.
- Professor Jane says that she hates Blackboard, would never use it, and runs her own Moodle installation from her computer at home. She will not move to another LMS unless it works exactly the same as Moodle.
- Professor Pat doesn’t have strong opinions about any one system over the others. But there are three features in Canvas that must be in whatever platform they choose.
The committee then declares that the winning product must work exactly like Blackboard and exactly like Moodle, while having all the features of Canvas. Oh, and it must be “innovative” and “next generation” too, because we’re sick of learning-management systems that all look and work the same.
The manager then starts writing a request for proposals. This process often starts with an email asking other campus-technology officers to share their own versions. After getting three or four of them, sometimes from wildly different types of colleges with wildly different needs, the manager may start cutting and pasting together bits from the various documents. There is a lot of pasting, but not so much cutting. Lists of requirements are added together, and then added to based on whatever the selection committee comes up with.
Case in point: One recent RFP from a large public-college system included “integration with Second Life” as a requirement. If you are not up on your educational technology, this is a little bit like requiring integration with Myspace or Napster.
None of this happens because the people involved with the selection are lazy. To the contrary, the entire selection process is time-consuming and thankless. The real problem is that it is difficult to gather real teaching and learning requirements, and the people in charge of doing it on campus have neither the time nor the training nor the resources to do the job properly.
So they do the best that they can under the circumstances. They draw on the personal preferences of those who are willing to participate. They ask peers at other institutions what they did. The process often results in a set of requirements that have no real connection to the actual teaching and learning needs on campus.
The makers of learning-management systems generally know this, but they are captive to the process. There is a strong incentive for them to say they can meet colleges’ needs in order to make sales.
For example, they know that if they can’t check the “integrates with Second Life” box, then they meet one less requirement than their competitors do, and therefore won’t make it into the second round of the evaluation. The sales rep creates word salad to suggest that the product somehow connects to Second Life.
And who knows, maybe the School of Engineering made a big investment in Second Life simulations. It would be weird, but it does happen. So the company plays along and gives answers that probably aren’t helpful for making a good decision.
What’s worse, the companies usually don’t know why they won or lost. If there is any public justification given, that justification often doesn’t reflect the committee’s real calculations or the limitations of its evaluation. But the vendors want to win more sales. They have to try their best to make sense of the data that they have. Because their data is bad, they often come to the wrong conclusions about what customers actually need. Garbage in, garbage out.
The conversation inside the company goes something like this:
Sales Manager: Second Life integration has come up in five of the last eight RFPs. It’s a growing need. And we don’t have a good solution for it.
Product Manager: None of our existing customers have complained about poor Second Life integration. Ever. I mean, seriously? What they’re complaining about now is managing large classes in the gradebook. That’s the highest priority for the next release.
Sales Manager: That has never come up in an RFP or sales process. They don’t care about gradebook features. They care about Second Life.
Product Manager: They don’t care until they actually try to use the gradebook.
Executive: Can we do both of these things?
Product Manager: Not without increasing the risk of introducing bugs into the grading engine.
Sales Manager: I can’t promise that I’ll make my numbers without better Second Life integration. It’s killing us.
Executive: OK, let’s do the least costly Second Life integration we can manage and try to get that gradebook fix in as well.
The products that are available to us are heavily shaped by how we select them. Of course, this problem isn’t limited to learning-management systems, but our choices in this area carry big consequences. A bad selection process means a bad product to live with now and a bad set of options to choose from down the road. Open-source offerings don’t really solve this problem either, since they mostly have to go through the same selection process that proprietary ones do.
If we want better choices, then we need to get better at choosing.
There are efforts here and there to improve the situation. For example, The Chronicle recently reported on the University of North Carolina system’s effort to create a Yelp-like service for rating educational technology products. That’s a decent idea that may be especially helpful for products that are selected by individual faculty members.
But for shared services, colleges and universities need to get better at identifying the range of needs across the many classes and disciplines they serve. There is no easy answer to this problem. Nobody on campus is typically tasked with this responsibility, or trained for it, or given time and resources to do it properly.
Higher education needs to get better at academic needs assessment. That requires an entirely different and deeper set of questions than which features are important to put on a checklist. It requires an in-depth exploration of how teaching and learning happen in various corners of the campus community and what capabilities would be most helpful in supporting those efforts. That’s generally not work that can be carried out by a small, part-time committee on an ad hoc basis a few months before a product is selected.
Michael Feldstein is a partner at MindWires Consulting, co-publisher of the e-Literate blog, and co-producer of e-Literate TV.