There are no scales for weighing character, no calculators for computing grit. One cannot affix a decimal point to leadership skills.
Nonetheless, thousands of college students who receive private scholarships each year are rewarded for such qualities. Most scholarship providers, especially those that serve low-income and first-generation students, have long looked beyond grades and test scores. Now, many organizations are developing—and refining—assessments of “noncognitive” attributes.
Their approaches could provide useful models for admissions officials seeking new ways of gauging applicants’ potential, according to Carrie Besnette Hauser, a senior fellow at the Ewing Marion Kauffman Foundation. Ms. Hauser spoke last week at a conference hosted by the University of Southern California’s Center for Enrollment Research, Policy, and Practice. The hot topics here were noncognitive assessments and definitions of merit.
Ms. Hauser is a past president and chief executive officer of Kauffman Scholars, a program designed to help urban students in Kansas and Missouri prepare for college. The program identifies noncognitive attributes that might help or hurt students along the way.
Ms. Hauser described how several prominent scholarship organizations assess students’ strengths and weaknesses. Many have adopted interview and essay questions designed to measure noncognitive abilities, and some use diagnostic tests to select students and keep them on track in college.
The Daniels Fund Scholarship Program rewards students who demonstrate “moxie,” character, and well-roundedness, among other qualities. The program, she said, is developing an algorithm to screen applicants based on their noncognitive attributes.
Dell Scholars is among the first scholarship providers to test for emotional intelligence, or EQ. The program also uses a diagnostic test called the “student risk indicator,” given to scholarship recipients the summer before they start college. The results are used to determine students’ needs and to create early-intervention plans.
The Coca-Cola Scholars Foundation also uses an emotional-intelligence test—after the selection process. Students who receive scholarships take the EQ-i 2.0, a 133-question assessment. The foundation uses the results to help students as they consider careers, prepare for interviews, and evaluate their emotional strengths and weaknesses.
Recently, the Horatio Alger Association, which supports low-income students, commissioned a study of the causes and effects of resilience. The study, conducted by researchers at the University of Chicago, examined the attitudes and behaviors of students who had received the scholarship. It identified specific characteristics of highly resilient students, such as their tendency to “reframe” adversity as a positive challenge instead of a crippling defeat. The results will inform the selection of future scholarship recipients, according to Ms. Hauser.
Programs like those are among the testing grounds for noncognitive assessments, and the lessons learned could help colleges, Ms. Hauser believes. “These are incredible incubators, absolutely open ground for research,” she said. “These programs collect data, collect data, collect data.”
The Gates Millennium Scholars Program has plenty. Since 2000, the program has given more than $763-million in scholarships to minority college students. Noncognitive ratings make up roughly four-fifths of the weight in evaluations.
Each applicant must write short essay responses to eight questions, each tied to a noncognitive variable, such as “self-concept,” the ability to meet long-term goals, and community engagement. The scoring of those responses has become a carefully monitored science, as Larry Griffith, a vice president at the United Negro College Fund and head of the scholarship program, explained last week.
Noncognitive measures, he believes, are especially crucial for evaluating underrepresented students, because grades and test scores alone may skew views of their potential. “What noncognitive lets you do,” he said, “is drill down ... in a very defined way.”
‘Not Just What You Did, but Why’
Charles E. Lovelace Jr. has pondered the meaning of merit, and over time his understanding of the word has changed.
Mr. Lovelace is executive director of the Morehead-Cain Foundation, which provides full-ride scholarships to students at the University of North Carolina at Chapel Hill. A decade ago, he and his colleagues took a close look at their program. What they saw concerned them. Although recipients of the scholarship were earning good grades, many of them weren’t highly engaged on the campus.
So they took a closer look at the information collected on the application. “Leadership” is one of the Morehead-Cain Foundation’s four traditional selection criteria, and the application at the time asked applicants to note leadership positions they had held. Students could list a slew of activities, but they had little room to describe what they had actually done in those roles.
Mr. Lovelace and his colleagues also went back and “graded” about 350 scholarship recipients at the point of graduation. They rated each student on a 1-to-4 scale, from “unengaged” to “added clear distinction to the university.” When they compared those ratings with the information used to select applicants, they found no correlation with SAT scores, or with the high-school activities and leadership positions students had held. Dazzling credentials didn’t guarantee a dynamic leader.
“That was a real wake-up call that really transformed how we think about this,” Mr. Lovelace said.
The application now limits the number of activities students may list, and provides more room for descriptions. “We wanted more stories and examples,” Mr. Lovelace said. “Not just what you did, but why, how, and what was the outcome?”
Based on the characteristics of students with the highest engagement scores, officials developed secondary selection criteria, including “commitment,” “empathy,” and “spark,” a measure of a student’s energy, self-confidence, and communication skills.
For the initial screening, the program now uses professional readers. After the first round of evaluations, applicants’ test scores and grades are not considered, allowing the selection committee to base final decisions on other factors, Mr. Lovelace said. The program also adopted a behavior-based interviewing process that discourages questions about current events. Interviewers are trained to look for specific characteristics.
Those changes, Mr. Lovelace believes, have broadened the diversity of the program’s applicants and raised the caliber of its recipients. In this exercise, he sees a message for college admissions. The next challenge for the profession, he said, is “how we separate merit from privilege, because so much of what happens to these kids is a result of their privilege rather than their achievement and their motivation.”