The Association of American Colleges and Universities, of which I am president, has actively supported efforts to clarify the most important goals for learning in college. Our LEAP initiative (Liberal Education and America’s Promise) has articulated the aims and learning outcomes of a 21st-century liberal education, and we helped frame the Lumina Foundation’s competency-based Degree Qualifications Profile, now being “beta-tested” by nearly 300 colleges. But not everything labeled “competency” comes even close to delivering the competencies described by these two initiatives. Narrow training programs do not meet the test; they almost always shortchange both broad learning and civic learning, focusing only on job skills and not on the knowledge and evidence-based reasoning capacities that graduates need for life and citizenship as well as for their careers.
We also need to look with a high degree of skepticism at so-called innovative programs that adopt the language of “competency” but in fact have simply slapped new labels on certifiably underperforming practices borrowed from traditional higher education. For example, many “innovative programs” employ what we might call the “Once and Done” approach to competency. The student passes a course that meets a competency requirement—for example, writing or quantitative reasoning. The competency is checked off the list. But is the student demonstrably competent? We have only to read national studies on traditional college seniors’ weak writing and math skills to see how poorly this check-off strategy actually works.
Writing, quantitative reasoning, analytic inquiry, engaging diverse perspectives, and all the other intellectual skills that LEAP and Lumina’s degree profile recommend must be practiced frequently, across the entire educational experience and beyond. With very rare exceptions, students cannot expect to develop and consolidate a complex competency in a single course.
Another faux approach to competency is what we might call the “Coverage Is Enough” strategy, in which a student reads course materials and then passes one or two examinations, often consisting of short-answer or multiple-choice questions. Borrowing this weak practice from traditional courses, many “innovative” programs now tout course-based test scores as evidence that the students are “competent.” But there is a huge disconnect between this approach to teaching and the complex competencies that educators value and the economy rewards. Bloom’s widely used taxonomy of intellectual skills highlights this disconnect by distinguishing “knowledge” and “comprehension” from capacities such as “analysis,” “synthesis,” and “evaluation.” Too many of the tests that students take in traditional and innovative programs alike probe only knowledge and comprehension. But pretending that “comprehension” is equivalent to “competency” will do nothing to help students develop the complex proficiencies in critical inquiry, evidence-based reasoning, adaptive learning, and problem solving that a good education should foster.
The fact is that students learn what they practice. If competency is the goal, then students’ own frequent and effortful work on projects, papers, research, creative tasks, and field-based assignments is the key. As innovators claim breakthrough practices, we need to evaluate their innovations against the evidence of competency that is—or is not—transparently demonstrated in students’ own portfolios of educational accomplishments.