When too many colleges have low graduation rates and high rates of student-loan default, you would expect the U.S. Department of Education to take bold action. But it came as a surprise recently when the department sent a letter to leaders of regional accrediting agencies asking them to shift from evidence-based institutional oversight toward something more like a data-collection service.
The letter offers guidance on a series of executive actions the department announced in November to “move toward a new focus on student outcomes and transparency.”
Accreditors at all levels of education share the goal of using data and other evidence that shed light on factors that inhibit quality and undermine student success. The new focus, however, crosses the line between what accreditors do and what government seeks to accomplish, and requires us — leaders of accrediting groups responsible for oversight of schools and colleges in dozens of states — to speak with one voice about our concerns.
The Department of Education’s letter urges accreditors to move beyond their work of qualitatively assessing every aspect of an institution and to tilt their focus toward a few narrow measures of performance based on uniform metrics, or else risk being shut down.
To the department’s credit, its request for more data comes with a promise of greater flexibility in how accrediting agencies scrutinize performance. Institutions and programs with solid track records would not need to be reviewed with the same frequency and depth, allowing the agencies to home in on struggling institutions.
But the department’s determination to have accreditors give greater weight to bright-line indicators — rates of retention, graduation, job placement, student-loan repayment and default — is disturbing. There are differences between the data we collect to assess quality, the data the department requires to enforce financial-aid and regulatory compliance, and the data legislators seek in order to develop policy. This new guidance “encourages” accrediting agencies to collect data for purposes that are clearly outside their missions.
As we’ve seen with the department’s heralded College Scorecard initiative, data dumps and rating systems lack nuance and push institutions to focus on outcomes — some of which they cannot control — rather than explore the myriad underlying causes of low performance in an effort to map a path toward improvement.
Accreditation can reveal useful information about why students aren’t graduating; how, why, and when they fail; and how to make adjustments in teaching and learning, course sequencing, and other factors. But reporting on only a few outcomes provides no such useful data.
Nor do simple bright-line measures tell the college-going public about the experience of attending an institution. They merely provide information to the U.S. Department of Education that can help it determine how to better administer federal financial-aid programs. That was the intent of the scorecard, a more appropriate vehicle for such an effort (although it was not welcomed by colleges).
Moreover, putting too much weight on a few metrics will not improve results. Fourteen years of the federal No Child Left Behind law have caused the nation’s public schools to focus their improvement efforts on a few narrow measures but have led to no better outcomes and a host of unintended consequences, including overuse of testing, skewing of curricula, demoralization of educators, and rampant cheating and efforts to game the system.
The Department of Education’s letter should raise red flags for colleges nationwide. That is because:
- Striving for common rate thresholds for outcomes could cause colleges to limit the access of underserved populations. Applying the same metric to all colleges could also lead the government to shut down or withhold resources from institutions that serve some of the least advantaged students, such as historically black colleges and universities, Hispanic-serving colleges, and tribal colleges. And what about community colleges grappling with returning adult students who may never have envisioned themselves in college or who need help reacquiring learning skills? We need these institutions to train both the entry-level and the transitioning work force, and they should not be judged solely by their graduation rates.
- Student-loan repayment, default, and job-placement rates are important outcomes of college but are often beyond an institution’s control. They reflect economic conditions and employment trends more than they reflect what colleges do to prepare graduates with degrees that have real-world value.
- The proposed shift would provide impetus for institutions to manipulate data and change admissions or grading policies to produce higher graduation rates. Such gamesmanship would actually limit educational opportunity and lead to inadequate academic and career preparation.
Setting standards and evaluating their use on campus, engaging institutions in the reflective process of self-study, and using expert and peer review to promote continuous improvement are activities that accrediting agencies have been conducting and refining for more than 100 years. This self-regulation, together with respect for the uniqueness of institutions, is one reason that American higher education continues to be the best, most diverse system in the world.
Equally important, holding accreditors accountable for data collection raises a host of questions: Who is the information for? How reliable is it? How will it be used? What are the consequences for colleges? Do the data help advance improvement?
There are other problems with the bright-line measures, most notably the limitations of the information itself. For example, the Department of Education relies upon its Integrated Postsecondary Education Data System, which provides some of what we need to know but little about the majority of those attending college, who don’t fit the definition of “traditional student.” Ipeds has looked only at first-time, full-time students who enroll in an institution in the fall term and receive a degree from that same institution; such students now amount to fewer than half of all college students.
This year, Ipeds has begun asking colleges to report data on part-time and non-first-time students, which will address some limitations. But the department still has not taken on key issues. For example, how should colleges account for students who complete a credential elsewhere? This requires access to individual student data, like those collected by the nonprofit National Student Clearinghouse (on whose board one of us serves).
Today’s students are young and not so young, attending part time, stepping in and out, and transferring in state and out of state. The clearinghouse provides a more complete demographic picture, one that shows the complications of reducing student behaviors to a simple graduation rate.
The proposal for accreditors to assess institutional compliance with federal data requests also requires clearer definitions of what we mean by “completion,” “student achievement,” and other outcomes within the contexts of our diverse institutions. We need to clarify the roles and responsibilities of the oversight triad — federal government, states, and accreditors — and ensure that neither states nor the federal government asks accreditors to perform roles that more appropriately belong to government.
For accreditation to help improve quality at the institutional level, accrediting teams and colleges rely on reams of data appropriately collected and applied. The data that inform federal policy are not the same as those collected to guide institutional performance. We need to resist efforts to redefine the purpose of accreditation and the missions of our institutions in misguided ways.