The Conservative manifesto included a commitment to improve higher education teaching, promising: “We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality.”
This statement has been interpreted as a first step towards linking funding to teaching quality—the possibility of a teaching equivalent of the Research Excellence Framework.
Internationally there is nothing unusual about connecting marginal or even primary state funding to educational performance in universities; Austria, Finland, the Czech Republic, Sweden and Norway are among the many countries that do it. The quality and internationalisation of teaching determine as much as 25 per cent of the total funds that Norwegian universities receive, for example.
I have long argued that university teaching should be evaluated in a similar way to research. And, in theory, a teaching REF would seem feasible. It could use expert academic review to look at the significance, originality and rigour of curricula, assessment and teaching, as well as learning outcomes and graduate success. It could also examine whether a programme’s structure and teaching methods had had a positive impact on similar programmes in other institutions. It could assess the quality of the environment for teaching, particularly the effectiveness of leadership and management in making good teaching possible. Many aspects of the judgement of quality would be informed by the Higher Education Funding Council for England’s excellent data on teaching benchmarks, covering matters such as student experiences, retention and graduate destinations.
However, it is not clear that this process is what the present government implies by its “framework to recognise universities offering the highest teaching quality”. If it were to follow the line taken by the previous coalition, it would venture into the realm of new measures of “learning gain”, an approach particularly endorsed by David Willetts, the former universities minister. These measures would then probably be combined with existing ones, such as the National Student Survey, to deliver a component of teaching funding by formula.
Additional measures of “learning gain” imply that universities’ current assessment processes fail adequately to assess generic graduate competences such as critical thinking and problem solving. To appraise these separately would require tests at the beginning and at the end of undergraduate programmes. These could include the Collegiate Learning Assessment, which is used mainly by American institutions to benchmark their students’ performance, and the Assessment of Higher Education Learning Outcomes, an initiative from the OECD.
It is highly unlikely that universities would accept the results from any of these tests as a valid measure of quality. Moreover, the additional burden on students of undertaking supplementary examinations related to what would effectively be a national curriculum would be intolerable. In any case, the National Student Survey already acts as a proxy for learning gain, since students who report more positive experiences gain better degrees (even after controlling for entry scores).
Still less palatable than extra tests would be a formula-based allocation of funding using existing measures such as student experience and graduate destinations information.
Australia took this approach in the late 2000s, but the system was widely criticised for methodological weaknesses, such as an ordinal ranking process in which funding variations were determined by differences smaller than the error of the measurements. Although the Australian performance-based system was ultimately derived from a successful scheme that I introduced at the University of Sydney, it omitted crucial components. At Sydney, the funding allocation was only one part of a suite of initiatives to improve teaching and help attract high-performing students. Critically, this coherent system included a collegial, academic-driven process of expert review of each faculty's educational effectiveness.
A teaching evaluation framework that used peer review and was consistent in important respects with the REF would be more acceptable. However, the expense and opportunity cost would be considerable. Why add to the burden of quality assurance at national level? It is not as if the quality of university teaching and student experiences is unsatisfactory; on the contrary, both are good in absolute terms and constantly improving.
British universities already have solid teaching quality measures. The National Student Survey in particular has probably been the most effective and best value single policy initiative in the area of improving the UK student experience in the past 10 years. In last year’s review of the National Student Survey, we noted a widespread perception that it had had a profound impact on universities’ commitment to improving learning and teaching. Its results are taken seriously, are built into internal planning and review systems and are responsible for tangible improvements in all institutions. It has not been necessary to link the results to national funding to derive these benefits.
A final consideration weighing against a teaching REF is the impact of other incentives for institutions to provide an excellent student experience. These include the removal of caps on student numbers and, potentially, a more relaxed approach to the setting of fees. Rather than trust the government to introduce an additional framework of control, the nation might instead rely on students to choose wisely and on universities to respond by continuing to compete vigorously for their custom through providing higher-quality experiences.
Paul Ramsden is a key associate of PhillipsKPA, an educational consultancy based in Melbourne, Australia, and a visiting professor at UCL Institute of Education, London.
Published in Research Fortnight, 30 May 2015.