The decision about which higher education institution to study at is among the most important made in life, but how those decisions are made – and the basis on which universities are ranked for their education – has now come under fire.
“There is no shortcut to measuring the quality of higher education that bypasses student learning outcomes. Some proxies may correlate, but it is for learning gains that we go to university.”
This was the crux of the argument recently put forward by Andreas Schleicher, director for education and skills and special advisor on education policy to the secretary-general of the Organisation for Economic Co-operation and Development, or OECD.
This is an article from University World News, an online publication that covers global higher education. It is presented here under an agreement with The Chronicle.
Delivering his paper, “Value-Added: How do you measure whether universities are delivering for their students?”, at the 12th annual Higher Education Policy Institute, or HEPI, lecture, Schleicher said universities can now measure learning outcomes in appropriate, valid and reliable ways better than ever before – and that making the extra effort was therefore “worth it”. HEPI released a revised version of the address last week.
“Unless we measure learning outcomes, judgements about the quality of teaching and learning at higher education institutions will continue to be made on the basis of flawed international rankings, derived not from outcomes, not even outputs – but from idiosyncratic inputs and reputation surveys,” he says.
While universities must cover outcomes in academic disciplines, he argued it is even more critical to assess the transversal skills that employers value. This raises questions of achievability and methodological difficulty, and Schleicher does not shy away from the awkward politics such an approach creates.
Most to lose
He believes the biggest opposition to assessing learning outcomes will come from countries and institutions with the most to lose if their outcomes turn out to be less sound than their reputations suggest.
“When we look in the mirror, we may not appear as beautiful as we believed… [the opponents] have loud voices, but it just goes to show we should all be trying harder,” he says.
Schleicher says there are three reasons why measuring outcomes must be approached internationally: higher education has become a global endeavour; there are opportunities to learn from diversity; and there is an opportunity for capacity building.
Yet the very difficulty of such measurement is the reason no country has undertaken it alone. Schleicher believes that by pooling expertise and experience internationally, higher education can make greater strides more swiftly.
Chinese and Indian students already made up a significant share of enrolment in English universities in 2013, and those figures are predicted to grow. Estimates suggest that by 2030 around 40% of science, technology, engineering and mathematics graduates will be from China.
Yet Asian universities do not have “a fair chance” to compete on any of the metrics currently used to judge their success. Rankings built on past reputation rather than current outcomes mean no new entrant can compete with the incumbents.
Schleicher says degrees and qualifications are signals, but he questions how reliably they reveal what people know and what they can do with their knowledge. An adult skills survey in Italy found that people with university qualifications had higher numeracy skills than those with only school qualifications – but there remained “a surprising amount” of overlap.
“It is a cross-sectional picture with some people continuing to learn throughout their lives and others losing skills. Yet, it shows degrees and qualifications are not always a good predictor of the skills people currently have,” he says.
Which skills to test?
Those differences are increasingly pronounced across countries. Japanese high school graduates performed better than Italian university graduates on foundation skills like literacy and numeracy. Yet, Schleicher recognises, had other skills been tested, the outcomes may have painted a different picture.
“It goes to show there are things to learn from becoming better at measuring skills and knowledge rather than just looking at degrees that may have the same names, but not necessarily the same content across countries,” he says.
The upshot is three questions: why measure learning gain internationally; how to measure it in any form; and how to know those measures actually reflect the quality of higher education learning outcomes.
In the past 30 years, the focus of higher education has shifted in response to a changing working environment. The rapid growth in jobs requiring higher-order cognitive skills has created a global need for more graduate employees, forcing universities to shift from providing a minority with research capabilities to equipping up to 50% of a nation with employable knowledge and skills.
Students are increasingly bearing the rising costs of higher education and are consequently more discriminating, placing a premium on future employment opportunities. Institutions compete to provide more relevant knowledge and skills, while students travel abroad to study or use digital platforms to deliver or supplement their learning.
Schleicher says this creates a powerful demand for data measuring the quality of teaching and learning, so that higher education institutions can build on their strengths and address their weaknesses.
Governments need data to determine policy and funding priorities; employers need data to assess the value of qualifications from different institutions; and students need data to make informed decisions about their higher education institutions.
“There is a continuing and damaging absence of information on quality to ground credible benchmarking and comparison… [comparative measures] allow governments to evaluate the quality of their university-educated human capital against international standards,” he says.
Measures of learning outcomes would provide universities with “profound insights” into the effectiveness of their teaching and learning and constitute a highly significant advance in the quality assurance environment.
Achieving these measures is significantly more difficult. It involves questioning which skills should be valued, measured and compared, with universities discovering their own answers.
Labour demand is critical
However, labour demand is critical, because the things that are easy to teach and test are also easy to digitise, automate and outsource. Schleicher says one obvious answer is to assess learning outcomes in academic disciplines, as these can be interpreted in the context of departments and faculties.
However, this poses its own challenges: it requires differentiated instruments reflecting international agreement, and it is likely to exclude areas that are not amenable to large-scale assessment or not sufficiently invariant across cultures and languages.
“Students still learn mostly individually, but the more interdependent the world becomes, the more we rely on great collaborators and orchestrators able to join others in life, work and citizenship,” Schleicher says.
Consequently, an assessment of learning outcomes that goes beyond disciplines to include transversal skills would allow a wider range of comparisons, but would also need to reflect cumulative learning outcomes and prior learning.
In conclusion, Schleicher says international measures of higher education learning outcomes demand several elements: they have to reflect central and enduring parts of higher education teaching relating to the quality of outcomes; reflect aspects that can be improved; and be appropriate across cultures, institutions and systems.
Balancing breadth and depth
Any metric has to balance breadth and depth, avoiding tunnel vision while giving educators enough detail to stimulate improvement. There also has to be a balance between outcomes and process: while the design and implementation of an assessment are important, so is the process of communicating results to key stakeholders.
Providing faculties with meaningful feedback on the quality of teaching should be a central objective.
The outcome of an effective measurement process is the improvement of learning across every level of the university system. Measurement should be performance-based, should evaluate the thinking processes students go through, and should assess collaborative and social skills.
“Measurement today can make students’ thinking visible and allow for divergent thinking,” Schleicher concludes.