I continue to see merit in this approach. All the current talk about alternatives to college and the potential infusion of billions of federal work-force-training dollars creates even more urgency to see which programs are worth attending. An effective system for that, Sharp told me, could help “get people away from dead-end jobs.” Need another reason? Consider that an organization called Credential Engine continues to develop a directory of educational credentials that now numbers nearly one million but provides little information to help people assess which programs are valuable and which are duds.
Ultimately, Sharp says, EQOS hopes to let colleges and other providers use tools that make results visible to prospective students — "the educational equivalent of Energy Star." That point feels a long way off, though EQOS hopes to develop the institutional tools in 2022.
The EQOS model calls for measuring programs, academic and otherwise, on five criteria: learning, completion, placement, earnings, and student satisfaction. Each would be gauged against an established standard that could be audited or verified by an outside party. Unlike accreditation, the process isn't meant to assess an institution's quality but rather to evaluate programs and, as Sharp puts it, their "impact on a student's life."
I was curious to understand what obstacles stand in the way. Here’s some of what Sharp told me.
Finding the right data is hard. Heck, even deciding what data to collect — on, say, earnings — can be a complex calculation. Do you measure participants' salary changes one year after they complete a program? Three years after? What happens when that information isn't readily available? EQOS has been working on evaluations of job-training programs in Colorado and Indiana, for example, two states known for having rich salary data. But even there, the records aren't complete. So often "you're cobbling together sources," says Sharp. It would make sense, she says, to collect outcomes data from the institutions offering the programs, but their systems "were never set up that way." (Sound familiar? In the spring, the Project on Workforce at Harvard University also found scant evidence of outcomes assessment among the 300-plus training programs it studied.)
Showing what students learned isn't any easier. While assessing "satisfaction" is doable — EQOS often relies on the Net Promoter Score for that — measuring the skills and knowledge students gained (or didn't) "is its own beast," Sharp told me. The EQOS philosophy calls for institutions themselves to identify their programs' educational objectives, the criteria for success, and how to verify that those criteria have been met. That allows for flexibility and institutional autonomy, but it also seems to make the evaluation hard to routinize.
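For readers who don't know the metric: the Net Promoter Score boils a single survey question (how likely are you to recommend this program, on a 0-to-10 scale) down to one number, the percentage of "promoters" minus the percentage of "detractors." Here's a minimal sketch of that standard calculation; it's purely illustrative, and nothing in it reflects EQOS's actual survey instrument.

```python
# Minimal sketch of the conventional Net Promoter Score calculation.
# Illustrative only; not EQOS's implementation.

def net_promoter_score(ratings: list[int]) -> float:
    """Compute NPS from 0-10 "likelihood to recommend" ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count toward
    the total but neither group. NPS ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 6]))  # 30.0
```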
Nothing compels institutions to join this voluntary effort. The institutions EQOS has worked with (including a Denver coding boot camp I once profiled) believe in highlighting their outcomes, Sharp says, and “want to have a way to do it systematically.” But unlike accreditation, which is required for participating in federal student-aid programs, this process doesn’t have such a lever (although Sharp has argued for one).
The model doesn’t seem as applicable to more-traditional college programs. I’m eager to see results from an EQOS pilot begun in August with seven two-year and four-year colleges in New Jersey. State officials there told me this week that they’re using the project to hone their own evaluations. “We’ll take what works and leave what doesn’t,” said Brian Bridges, the secretary of higher education. The state is especially interested in measuring how educational programs affect social mobility.
The New Jersey experiment could also answer some of my questions about whether colleges will consider evaluations like this too burdensome to bother with. Bridges, despite being a data guy himself, says he’s mindful of that. The state won’t adopt new standards, he said, just “for the sake of adding another accountability metric.”
Officials there hope to report out results after the project concludes, in February. More colleges wanted to join the pilot, said Annie Khoa, a senior adviser to Bridges, but EQOS didn’t have the capacity. “We were very surprised,” she said, “by how much interest this got.”
Recommended reading.
Here are some education stories from other outlets that recently caught my eye. Did I miss a good one? Let me know.
- Rural schools have their challenges, but they can also be “sites of learning, community, and excellence,” two education scholars write in The Daily Yonder. That’s often overlooked, with “tragic consequences.”
- “In an era of viral digital disinformation, eroding governance norms, and increased political violence, the same old campus ‘civic engagement’ programs no longer seem sufficient,” EdSurge reports. Now colleges are rethinking their efforts.
- A new study shows that the lack of internet access in the United States is stark in rural Southern regions with higher Black populations, and as Thomson Reuters Foundation reports, experts say “that dynamic amplifies existing ‘structural racism.’”
Correction. Last week, in writing about Brandman University becoming UMass Global, I incorrectly described the role of a new online-leadership group at the University of Massachusetts system. It won’t be overseeing the rollout of UMass Global, but does meet regularly to discuss ways of collaborating with that institution on matters like transfer agreements, complementary academic programs, and marketing plans, and on advancing online education across the entire system.
Got a tip you’d like to share or a question you’d like me to answer? Let me know, at goldie@chronicle.com. If you have been forwarded this newsletter and would like to see past issues, find them here. To receive your own copy, free, register here. If you want to follow me on Twitter, @GoldieStandard is my handle.