The handyman has a tool for everything, but the admissions dean is not so lucky: He must make do with just a few.
Every year, presidents and professors expect freshmen who are curious, determined, and hungry for challenges. The traditional metrics of merit, however, can’t reveal such qualities. Standardized-test scores may or may not predict a given student’s long-term potential. Grade-point averages present only a partial view of an applicant’s talents and work habits. And so, some admissions officers say, it’s time for a new set of tools.
Over the last decade, a handful of colleges have designed “noncognitive” assessments to measure attributes—like leadership and the ability to meet goals—that content-based tests do not. Succeeding in college often requires initiative and persistence, or what some researchers call “grit.” Noncognitive measures are an attempt to gauge such qualities. If the SAT asks what a student has learned, these assessments try to get at how she learned it.
Long an afterthought in academe, alternative indicators of student potential have captured the interest of instructors, testing companies, and enrollment chiefs. As science unspools the secrets of how we learn, it inspires new approaches to assessment. The way most colleges have long evaluated applicants reflects beliefs about what counts most. If those beliefs evolve, it follows, so, too, should the admissions process.
Imagining a new system, however, is easier than building one. What qualities should the 21st-century college consider? How much can noncognitive assessments—typically in the form of self-evaluations and short essays—really tell a college? And are they reliable?
Admissions officials plan to weigh those questions this week at a national conference sponsored by the University of Southern California’s Center for Enrollment Research, Policy, and Practice. The conference, “Attributes That Matter: Beyond the Usual in College Admission and Success,” will include experts in noncognitive aspects of learning, which represent the next frontier in holistic admissions.
Jerome A. Lucido, the center’s executive director, predicts that new measures of student potential will eventually become fixtures in higher education, allowing admissions officers to conduct more-robust reviews of applicants, while giving colleges valuable data on those who enroll.
“We don’t do enough work to understand why one student with a 3.5 GPA was successful and another one wasn’t,” he says. “We’ve ignored this realm because it was more difficult, less understood. Now we’re at a point where noncognitive measures can take their place alongside other things.”
What’s in a Name?
For the last century, cognitive measures have ruled the educational roost. College-entrance tests were established as “hard” measures of knowledge and ability.
As David T. Conley explains in a forthcoming commentary piece in Education Week, researchers and psychometricians once paid relatively little attention to aspects of learning deemed noncognitive, the default term for “everything that was not grounded in or directly derived from rational thought.” So educators saw a hierarchy, with cognitive skills on top and noncognitive attributes at the bottom.
Mr. Conley, a professor of education at the University of Oregon, is one of several researchers who hope to change that perception. The relationship between the what and how of learning, he argues, is less hierarchical and more symbiotic. Sure, he says, students use their brains when they recall how to solve a mathematics problem, just as they did when they tackled the difficult and frustrating task of learning the formula in the first place.
“It’s time to think about noncognitive dimensions of learning as forms of thinking in and of themselves,” Mr. Conley writes.
To that end, he proposes replacing “noncognitive” with the term “metacognitive learning skills.” A name change, he argues, could help legitimize the development of new assessments.
Neuroscience supports the idea that so-called cognitive and noncognitive attributes are, in fact, interwoven. The age-old distinction between mind and heart, brain and body, much research suggests, may not be a useful or even accurate way to think of ourselves. Language, reasoning, and other high-level cognitive skills taught in schools “do not function as rational, disembodied systems, somehow influenced by but detached from emotion and the body.”
Those words appeared in a paper published in 2007 by the journal Mind, Brain, and Education and co-written by Mary Helen Immordino-Yang, an assistant professor of education at Southern California who is scheduled to speak at the conference. A neuroscientist and human-development psychologist, she has studied the neural roots of learning, creativity, and morality.
In the paper, “We Feel, Therefore We Learn: The Relevance of Affective and Social Neuroscience to Education,” Ms. Immordino-Yang and Antonio Damasio, a professor of neuroscience at the university, boil a complex discussion down to a simple conclusion: Logical-reasoning skills and factual knowledge are only so valuable on their own. Students also need an “emotional rudder”—an ability to transfer skills and knowledge to real-world situations—to succeed. “Simply having the knowledge,” they wrote, “does not imply that a student will be able to use it advantageously.”
If that’s true, then colleges annually accrue an abundance of input variables that may have little bearing on the long-term outcomes their marketing materials and mission statements so often describe.
‘Diamond in the Rough’
The notion that test scores and GPAs tell too little of an applicant’s tale has long worried admissions officers. Even those who groan at reading a zillion personal statements and letters of recommendation insist that such documents can provide helpful insights, a glimpse behind all those numbers.
Although noncognitive assessments are supposed to do the same, there’s no consensus on how best to get at students’ intangible qualities. With no gold standard, researchers are dabbling in an array of approaches. The College Board has tested a standardized way to measure 12 qualities, such as artistic and cultural appreciation and integrity. The Educational Testing Service has created the Personal Potential Index, an online system allowing evaluators to rate applicants in six categories, including communication skills and teamwork. A means of standardizing letters of recommendation, the index has caught on at some graduate schools and may have a future in undergraduate admissions.
For now, most noncognitive assessments are homegrown experiments, exciting yet challenging. Just ask Noah Buckley, director of admissions at Oregon State University.
In 2004 the university added to its application the Insight Résumé, six short-answer questions based on the research of William E. Sedlacek, a professor emeritus of education at the University of Maryland at College Park and pioneer of noncognitive assessment. One prompt asks applicants to describe how they overcame a challenge; another, to explain how they’ve developed knowledge in a given field.
The answers, scored on a 1-to-3 scale, inform admissions decisions in borderline cases, those of applicants with GPAs below 3.0. “This gives us a way to say, ‘Hey, this is a diamond in the rough,’” Mr. Buckley says. For students with GPAs of 3.75 or higher, the scores help determine scholarship eligibility.
The Insight Résumé is a work in progress, Mr. Buckley says. Reading 17,000 sets of essays requires a lot of time and training. He believes the addition has helped Oregon State attract more-diverse applicants, but it’s hard to know for sure. A recent analysis found that although the scores positively correlated with retention and graduation rates, they did not offer “substantive improvements in predictions” of students’ success relative to other factors, especially high-school GPAs.
The university is now considering other ways to deploy the data. “There’s more we can do with this tool, which gives us rich information,” Mr. Buckley says. “To serve students and match them to services, we have to get out of the mentality that this is something we use only for admissions.”
Elsewhere, proponents of noncognitive assessments say such tools will become more necessary as applicant pools grow more diverse: Many underrepresented minority students struggle on the SAT but excel in other ways.
“This gets us out of the habit of talking about students as a 3.8, 29 ACT,” says Jon Boeckenstedt, associate vice president for enrollment at DePaul University, which now lets applicants write short responses to four essay questions, also based on Mr. Sedlacek’s research, in lieu of submitting test scores. “If nothing else,” Mr. Boeckenstedt says, “this allows us to think of students as multidimensional.”
Although only 5 percent of this fall’s incoming class completed the essays, Mr. Boeckenstedt believes the option sends an important message to students.
“So many places miss out on good kids, and, in turn, so many good kids rule themselves out, based on test scores alone,” he says. “We have to break out of the traditional way of evaluating what makes someone capable or smart or talented. Universities are supposed to evolve.”