One purpose of a college education is to create active and engaged citizens and future leaders. If we want students who have the potential to become such people, when we evaluate applicants we need to keep in mind the skills involved in good citizenship and positive leadership.
Many of the leaders who have gotten the world into its current messes did well on admissions tests and attended prestigious colleges and universities. Jeffrey Skilling, a former CEO of Enron, graduated from Harvard University’s business school, and Andrew Fastow, who was Enron’s CFO, graduated from Tufts University. John Rigas, the founder of Adelphia Communications, who was convicted of fraud in the company’s demise, received a bachelor’s degree from Rensselaer Polytechnic Institute.
The problem is not that those leaders aren’t smart; they are. The problem is that they lack — and their colleges and graduate schools evidently did not teach them — certain important skills, attitudes, and values involved in being good citizens and successful leaders.
How might the assessments colleges use in admissions decisions better reflect the kinds of qualities that matter not just during a student’s education, but throughout life? At the Center for the Psychology of Abilities, Competencies, and Expertise (originally located at Yale University and now, as the PACE Center, at Tufts), we have collaborated with teams across the country to develop assessments that measure some of the relevant skills. The model that underlies our assessments is called WICS, an acronym for “wisdom, intelligence, and creativity, synthesized” — although wisdom was not initially part of my theory.
The basic idea of the model is that active and engaged citizenship — as well as leadership — requires individuals to synthesize creative skills, to produce a vision for how they intend to make the world a better place for everyone; analytical intellectual skills, to assess their vision and those of others; practical intellectual skills, to carry out their vision and persuade people of its value; and wisdom, to ensure that their vision is not a selfish one. Can we apply the model to assessments that can be used in college and university admissions?
Typical admissions assessments are useful but narrow measures of ability and achievement. For example, the SAT Reasoning and Subject Tests primarily evaluate students’ remembered knowledge, and the analytical skills they can apply to that knowledge. Creativity, practical thinking, and wisdom are assessed minimally, if at all, on conventional standardized tests.
My collaborators and I hoped to broaden the scope of high-stakes admissions assessments. In our Rainbow Project — a study supported financially by the College Board — we designed questions to measure analytical, creative, and practical thinking that could be used to supplement tests such as the SAT Reasoning Test, which measures analytical skills in the verbal and mathematical domains. We brought together a large, national team of investigators with diverse points of view, including both supporters and critics of conventional college-admissions assessments. We tested 1,013 high-school students and college freshmen from 15 schools and colleges. We gave them analytical questions much like those found on traditional standardized tests, but we also had them answer creative and practical questions.
The creative tests got students to stretch their imaginations. For example, they might be asked to write a creative story to go with a title like “The Octopus’s Sneakers” or “3821.” Or they might be shown a collage of pictures, perhaps of musicians or athletes, and be asked to tell an oral story based on the collage. Or they might be asked to think up a caption for an untitled cartoon.
The practical tests required students to solve everyday problems, presented in written form or in videos. One video showed a student approaching a professor to ask him to write a letter of recommendation, but stopping as the professor’s blank expression indicated he did not know who the student was. The task was to decide what the student should do next. Another video showed a group of friends trying to figure out how to move a large bed up a winding staircase.
How can we evaluate answers to questions that seem so subjective? Our approach is to use well-developed rubrics. We assess analytical responses based on how sound, balanced, logical, and organized they are. We evaluate creative responses’ originality, quality, and appropriateness to the task presented to the students. With practical responses, we look at their feasibility with respect to time, place, and human and material resources. We assess wisdom-based responses according to how much they promote a common good by balancing the student’s and others’ interests — as well as larger institutional or global interests — in both the long and short terms, through the use of positive values.
The results of the Rainbow Project suggest that we can assess skills beyond those involved in critical reasoning. We had predicted that the tests would show how well students did at creative and practical thinking; we were right.
The data also demonstrated that solving multiple-choice problems is a stable skill. In other words, students who did better on one multiple-choice question tended to do better on others as well, regardless of what the questions were supposed to measure. Thus, consistent reliance on a multiple-choice format seems to benefit some students over others.
We found that using broader tests for admissions improved our ability to predict students’ academic success. Our tests predicted students’ grades as college freshmen twice as well as the students’ SAT scores alone did, and 50 percent better than predictions relying on high-school grade-point averages combined with SAT scores. In other words, our tests were not quixotic ventures into esoteric realms.
In addition, we discovered that using our tests significantly reduced ethnic-group differences in scores. Different groups sometimes have different concepts of intelligence and socialize their children to be intelligent in different ways. For instance, on our tests, American Indians over all performed worse than members of most other groups on analytical assessments, but had the highest average score on questions that involved oral storytelling.
Using such tests could mean not only admitting more members of minority groups to selective colleges, but also improving the academic quality of the student body — not compromising academic excellence. Different groups excel, on average, in different ways, and giving students a chance to show how they excel also gives them the opportunity to show that they can succeed. Our results are consistent with those of Claude M. Steele of Stanford University, who has shown that students may score less well on a conventional test of ability or achievement if they feel the test presumes low performance by groups to which they belong, and if their own stereotype about their group is activated.
Furthermore, the tests do not benefit only members of ethnic minorities. Many students who come from the majority group — even from well-off families — learn in ways that are different from those assessed by conventional standardized tests. Although those students may well have the abilities they need to succeed in college and throughout life, that fact may not be reflected in their scores on such tests. Our tests measure such students’ aptitudes more comprehensively.
In 2005 I moved from Yale University to Tufts University, where I became dean of the School of Arts and Sciences. Tufts has strongly emphasized the role of active citizenship in education, so it seemed like an ideal setting to put into practice some of the ideas from the Rainbow Project. In collaboration with Lee Coffin, dean of undergraduate admissions, who had primary responsibility for executing the project, and with the cooperation of Linda Abriola, dean of engineering, I began Project Kaleidoscope. It puts into use the earlier project’s ideas and also assesses the construct of wisdom.
On the 2006-7 application for the two undergraduate schools at Tufts — arts and sciences, and engineering — we introduced optional extended-response questions designed to measure creative, analytical, practical, and wise thinking. The 15,381 applicants for the Class of 2011 were invited to answer any one of the questions, thus allowing students to highlight one of their strengths and avoiding the usual pressure inherent in high-stakes tests that require students to answer many complex questions in very short amounts of time.
One creative question asked students to write a story with the title “Confessions of a Middle-School Bully”; in another example, students were asked to design an advertisement for a new product. One practical question asked students to describe how they had persuaded friends to share an unpopular belief. A wisdom question asked students how they could use a passion of theirs to benefit other people.
We also evaluated applicants in a new way. We looked not only at each student’s academic achievement, talent, and personal qualities, but also at his or her wisdom, creativity, and practical intellectual abilities.
The preliminary results under Project Kaleidoscope for the Tufts Class of 2011 are very promising. The quality of applicants on the whole rose substantially, as measured by the traditional indicators of grades, class rank, standardized-test scores, and curricular rigor. Considered by any standard that we have used, both the academic and the personal credentials of the applicant pool were the best we have seen.
The mean SAT scores of accepted students are at an all-time high for Tufts. The new assessments are not negatively correlated with SAT scores; rather, they are not much correlated at all. Thus, adopting the new measurements does not result in less-qualified applicants being admitted. Indeed, the applicants admitted to Tufts during this pilot program are more qualified, considered in broader terms. Perhaps most rewarding were the comments from many applicants that our application, unlike typical forms, had given them a chance to show their true strengths.
With regard to diversity, it appears that Project Kaleidoscope helped us identify and admit substantially more qualified students of color. For example, we admitted roughly 30 percent more African-American students than the year before, and 15 percent more Hispanic students. Greatly increased financial-aid resources, made possible as a result of the generosity of our alumni donors, contributed to enabling those students to matriculate.
As was the case with the Rainbow Project, we showed that it is possible to increase academic quality — defined in multiple ways — and diversity simultaneously. Furthermore, we have now done so for an entire undergraduate class at a major university, not just for samples of students at a small number of colleges. We hope that next year’s results will match this year’s.
Moreover, we have sent an important message to students, parents, high-school guidance counselors, and others: We believe that there is more to a person than just the narrow spectrum of skills assessed by traditional standardized tests. Many colleges, including Tufts, have sought to admit students on the basis of broader skills; Project Kaleidoscope allowed us to highlight the additional skills and to assess them in a quantifiable way.
Once students are admitted through such a program, they should be taught in ways that match their strengths. Otherwise, their diverse learning skills may not serve them well in college. At Tufts, we have created a new center that offers seminars for faculty members on teaching, aimed at the same skills we highlighted in the admissions process. Those who participated in the seminar’s first year gave it, on average, an almost-perfect rating.
In a related research project in which I collaborated at Yale — with Steven Stemler, Elena L. Grigorenko, and Linda Jarvin, and supported by the Educational Testing Service and the College Board — we applied the same principles to high-stakes achievement testing used for college admissions and placement. We modified advanced-placement tests in psychology and statistics so that they also assessed analytical, creative, and practical skills.
In psychology, for instance, we gave students four ways to discuss sleep. After a brief introduction (“A variety of explanations have been proposed to account for why people sleep.”), we presented four tasks (the type of task is indicated in parentheses):
- Describe the restorative theory of sleep (memory).
- An alternative theory is an evolutionary theory of sleep, sometimes referred to as the “preservation and protection” theory. Describe this theory and compare and contrast it with the restorative theory. State what you see as two strong points and two weak points of this theory compared to the restorative theory (analytical).
- How might you design an experiment to test the restorative theory of sleep? Briefly describe the experiment, including the participants, materials, procedures, and design (creative).
- A friend informs you that she is having trouble sleeping. Based on your knowledge of sleep, what kinds of helpful — and health-promoting — suggestions might you give her to help her fall asleep at night (practical)?
We found that by asking such questions, as in the other studies, we were able to both increase the range of skills we tested and substantially reduce ethnic-group differences in test scores.
The ways in which we assess students’ knowledge and skills have not changed much over the past 100 years. Perhaps such assessments met the cognitive demands placed on students a century ago, but they do not meet the cognitive demands of the world today. Active and engaged citizens now must be able to: be flexible, responding to rapid changes in the environment; think critically about what they are told in the media, whether by newscasters, politicians, advertisers, scientists, or others; carry out their ideas and persuade others of their value; and, most of all, have the wisdom to avoid becoming bad leaders.
It is time for assessments, as well as instruction, that reflect the demands of the current era.
The assessments I have described do not measure all the skills required for success in everyday life. For example, although I assess teamwork in courses I teach, the assessments mentioned here do not measure that skill — at least, not directly. Moreover, making the assessments usable on a statewide or nationwide basis would no doubt present new challenges. And, naturally, expanded assessments would cost more time and money.
But when we consider the benefits of broadening the horizons of students — of both genders and all ethnic backgrounds — who learn and think in different ways, the costs no longer seem too large. Our society needs not only people who can analyze and memorize well; even more important are citizens and leaders who are also creative, practical, and wise. One way to develop those talents is to consider in college admissions and instruction the broad range of skills behind good citizenship and leadership.
Robert J. Sternberg is dean of the School of Arts and Sciences, a professor of psychology, and an adjunct professor of education at Tufts University. His books include Wisdom, Intelligence, and Creativity Synthesized (Cambridge University Press, 2003).
http://chronicle.com — The Chronicle Review, Volume 53, Issue 44, Page B11