The first phase of an ambitious international study that aims to assess and compare learning outcomes in higher-education systems around the world was announced here on Wednesday at the conference of the Council for Higher Education Accreditation.
The study, called the Assessment of Higher Education Learning Outcomes, is a project of the Organisation for Economic Co-operation and Development.
Richard Yelland, of the OECD’s Education Directorate, is leading the project, which he said he expects will eventually offer faculty members, students, and governments “a more balanced assessment of higher-education quality” across the organization’s 31 member countries.
After decades of quantitative growth in higher education, learning outcomes are becoming a central focus worldwide, Mr. Yelland noted in his presentation. For example, one aspect of the Bologna Process, through which a number of European nations are harmonizing their degree systems, involves defining learning outcomes. Learning-outcome measures are also increasingly being incorporated into quality-assurance mechanisms.
“Consensus is emerging on the need to improve and ensure quality for all,” Mr. Yelland said.
The OECD project’s first phase, or “strand,” will be a feasibility study focused on developing learning measures.
Measuring General Skills
To determine to what extent generic skills can be measured across diverse institutions, languages, and cultures, the feasibility study is adapting the Collegiate Learning Assessment, an instrument developed by the Council for Aid to Education in the United States, to an international context. The online assessment will seek to measure generic skills, such as problem solving, critical thinking, and practical application of theory. The questions are not specialized, so most undergraduates can answer them regardless of their field of study.
At least six nations are participating in the feasibility study. At least 14 countries are expected to participate in the full project, with an average of 10 institutions per country and about 200 students per institution, for a total sample of about 30,000 students.
Mr. Yelland’s presentation drew several questions from the audience.
“I’m skeptical about some of the instruments that can be used for this analysis,” said Eduardo Marçal Grilo, a former minister of education for Portugal who represents a European foundation that he said is considering providing financial backing for the project.
“The problem is not to evaluate, but to do a comparison,” he said. The project’s target population will be students nearing the end of three-year or four-year degrees, and it will eventually measure student knowledge in economics and engineering. Even though the project will take students’ backgrounds and national differences into account, Mr. Grilo wondered how one could effectively compare mechanical-engineering students from institutions in Britain, Italy, Switzerland, and the United States.
Mr. Yelland replied that this first phase of the project will involve determining just which common factors exist. Because its focus is on general skills, “part of it is going above content, to look at the way in which engineers actually use the knowledge they have,” he said.
Another member of the audience worried about the implications for the autonomy of higher-education institutions, given how much of the discussion centered on producing a tool for comparing them. “If it is voluntary, why would any higher-education institution, any top institution, agree to use this tool if they think it is not in their best interest? Why would they want to take that risk?”
More Than a Ranking System
Mr. Yelland said in an interview that he thought much of the skepticism stemmed from concerns that the project would end up being just another ranking system. “This isn’t going to be a ranking,” he said categorically. “It is so much more. If we manage to produce reliable data, some people may well turn it into rankings, but that is not what this is about.”
Karine Tremblay, of the OECD, who is coordinating the first phase of the project, noted that all existing university rankings are based on already-available data, although they vary in which criteria they emphasize. The OECD’s project will provide new measurements whose object, she said, is not to offer the kind of snapshot assessment that rankings do, but to provide institutions with useful feedback. “If you think of a ranking as a house, this would allow improving the quality of the bricks,” she said.
While the goal of the project is not to produce another global ranking of universities, the growing preoccupation with such lists has crystallized what Mr. Yelland described as the urgency of pinning down what exactly it is that most of the world’s universities are teaching and how well they are doing it. “This is not about the top 100. There will only ever be 100 institutions in the top 100,” he said. Rather, the project is about “shining a light where no light currently exists,” into how the rest of the estimated 20,000 higher-education institutions in the world fare in teaching.
Mr. Grilo, despite his skepticism, said he thought the project was “worthwhile in itself” for the information it would generate. Others shared the view that it could yield valuable insights.
Judith S. Eaton, president of the Council for Higher Education Accreditation, said she was also skeptical about whether the project would eventually yield common international assessment mechanisms. But she added that “no matter what, there will be gains for the academic community.”
As higher education becomes increasingly globalized, Ms. Eaton noted, the same sets of issues recur across borders and systems, about how best to enhance student learning and strengthen economic development and international competitiveness. Whatever it ends up yielding, she said, the OECD project is at the center of an emerging “international higher-education space and an international quality-assessment community.”