The TEF is a waste of time and money

22 June 2016

A version of an article published in Research Fortnight, June 2016:

Just over a year ago I wrote in Research Fortnight about the proposed Teaching Excellence Framework. I contended that a TEF based on metrics was unlikely to make sense unless it was part of a broader system that included a substantial proportion of expert review. But such reviews would impose a material opportunity cost on universities. Given the generally high standards of teaching in UK higher education, the existence of a sound measure of quality that was broadly supported across the sector (the NSS) and other evidence of largely positive student experiences and outcomes, a TEF would seem to be a pointless additional burden.

There would also be the small matter that it might not work. When a very similar scheme, linked to marginal funding, was tried in Australia, it did not last long. Despite the existence of an expert panel to moderate the results, chaired by probably the best person for the job (the late Sir David Watson), the variations between universities were usually minor and rarely statistically significant. The scores obtained and the resultant funding variations remained highly controversial. Ultimately the system lacked credibility and was quietly dropped.

In the face of the evidence, why are we going ahead with a TEF? Governments are less interested in evidence than in selecting soundbites to support their agendas. The fact that we already provide an outstandingly good higher education experience for most students does not fit the narrative that universities must be continually pressured to fix imagined weaknesses. On this account, the problems of a metrics-based system moderated by an expert panel, so well revealed by the Australian experience, can be dismissed, and the negative comments made in the consultation can be subjected to the usual excuse that ‘maybe more research is needed – so we will proceed a bit more slowly’.

The White Paper itself provides an extraordinarily weak justification for the TEF. Its authors set the scene by covering familiar yet questionable ground – assertions that students need better information about teaching quality (disputable at best, particularly if it is not at discipline level), that robust measures of teaching quality in HE are not available (untrue), that student satisfaction is important (hardly – it is the quality of learning experiences and outcomes that matters, ‘satisfaction’ being a mere by-product), that contact hours are key to a better experience and greater satisfaction (definitely wrong – did they not look at the Open University?), and that there is a problem with social mobility that the TEF will help address (even if there were such a problem, why should a teaching assessment exercise address it?).

After this poor start, things get better with the proposed metrics. The TEF will initially use three indicators: a composite (calculation unspecified) of NSS scores from the teaching on the course, assessment and feedback, and academic support questions; non-continuation rates; and employment or further study rates. These are standard measures and, if the data are properly benchmarked, they promise reasonable validity. However, they are hardly without problems. The NSS was never designed to operate at institution level. Even at discipline level, courses have differing profiles of scores on the NSS dimensions; they can be strong on ‘teaching’ and weak on ‘assessment and feedback’, for example, or vice versa. Combining the measures will simply reduce the size of any differences.
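To see why, here is a minimal sketch in Python. The composite calculation is unspecified in the White Paper, so an unweighted mean is assumed, and all the scores are invented purely for illustration:

```python
# Minimal sketch: why combining NSS dimensions can shrink differences.
# Assumption: the composite is an unweighted mean (the White Paper does
# not specify the calculation). All scores are invented for illustration.

# Hypothetical scores (per cent agreement) on three NSS dimensions.
# Institution A is strong on teaching, weak on assessment and feedback;
# Institution B is the reverse.
institutions = {
    "A": {"teaching": 90, "assessment_feedback": 74, "academic_support": 82},
    "B": {"teaching": 76, "assessment_feedback": 88, "academic_support": 82},
}

for name, scores in institutions.items():
    composite = sum(scores.values()) / len(scores)
    print(name, scores, "-> composite:", round(composite, 1))

# Both composites come out at 82.0: a 14-point gap on the individual
# dimensions disappears entirely once the measures are combined.
```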

Variations between institutions on a single summative criterion are likely to be minimal. Expect some major differences of opinion and probable challenges to the fine judgements made at the margins between ‘meets expectations’, ‘excellent’ and ‘outstanding’. A measure that labelled only a few institutions at the extremes as good or bad might be a better bet; as an Office for National Statistics report on the data sources for the TEF tentatively puts it, it ‘may be possible to identify a small number of institutions which are significantly different, that is significantly better or worse’. The same report argues that the NSS and the retention and destinations data all suffer from the need to define the target population more clearly, to determine the extent of under- and over-coverage, and to adjust for non-response.
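The ONS’s caution is easy to illustrate. A rough sketch, using invented proportions and sample sizes and a conventional 95 per cent confidence interval, shows how an institution can be flagged only when its interval excludes the benchmark:

```python
# Rough sketch of the ONS point: flag an institution as significantly
# better or worse only when the confidence interval around its score
# excludes the benchmark. The benchmark, proportions, sample sizes and
# the 95% threshold are all illustrative assumptions.
import math

BENCHMARK = 0.83  # hypothetical sector benchmark (proportion agreeing)

def flag(p, n, z=1.96):
    """Classify an institution's proportion p from a sample of size n."""
    se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
    lower, upper = p - z * se, p + z * se  # approximate 95% interval
    if lower > BENCHMARK:
        return "significantly better"
    if upper < BENCHMARK:
        return "significantly worse"
    return "indistinguishable from benchmark"

# Invented examples: most institutions straddle the benchmark, so only
# a small number at the extremes can be labelled at all.
for name, p, n in [("A", 0.86, 400), ("B", 0.90, 2500), ("C", 0.79, 300)]:
    print(name, flag(p, n))
```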

An altogether better effort than the White Paper’s attempt to vindicate the TEF is the technical consultation on its assessment criteria. Still in draft form, of course, it has a properly thought-out framework (Teaching Quality; Learning Environment; Outcomes and Learning Gain) with criteria and evidence clearly delineated for each. I am tempted to suggest, though, that the principles have been derived from the existence of suitable indicators rather than the other way round. For some inexplicable reason, the NSS questions on personal development, such as improved communication skills, are not included among the indicators of learning outcomes and gain. Curiously, the proportion of HEIs falling into the three performance categories is pre-defined (‘norm-referenced’), with 50 to 60 per cent of institutions in the middle or average category (although it is labelled ‘excellent’).

The role of the assessors and the expert panel, who will be very busy people, is admirably described and detailed. Indeed, the proposed assessment process shows just how heavy the additional work and expense will be. As well as the time and payment of many assessors, there is the additional evidence that each HEI will be invited to submit (the ‘provider submission’) on matters such as leadership to support teaching excellence, local surveys, teaching and research links, curricula involving professional bodies … and many others.

It is inevitable that the labour involved in preparing provider submissions and deciding what will go into them will reduce an institution’s capacity in teaching, research and administration. If the TEF survives the many obstacles in its way, we may hope for a rigorous evaluation of its impact on teaching quality and learning outcomes. This should incorporate a balance sheet that weighs its considerable costs against any improvements. But I for one will not be holding my breath for it.


Why England doesn’t need a ‘Teaching REF’

6 July 2015

The Conservative manifesto included a commitment to improve higher education teaching, promising: “We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality.”

This statement has been interpreted as a first step towards linking funding to teaching quality—the possibility of a teaching equivalent of the Research Excellence Framework.

Internationally there is nothing unusual about connecting marginal or even primary state funding to educational performance in universities; Austria, Finland, the Czech Republic, Sweden and Norway are among the many countries that do it. The quality and internationalisation of teaching determine as much as 25 per cent of the total funds that Norwegian universities receive, for example.

I have long argued that university teaching should be evaluated in a similar way to research. And, in theory, a teaching REF would seem feasible. It could use expert academic review to look at the significance, originality and rigour of curricula, assessment and teaching, as well as learning outcomes and graduate success. It could also examine whether a programme’s structure and teaching methods had had a positive impact on similar programmes in other institutions. It could assess the quality of the environment for teaching, particularly the effectiveness of leadership and management in making good teaching possible. Many aspects of the judgement of quality would be informed by the Higher Education Funding Council for England’s excellent data on teaching benchmarks, covering matters such as student experiences, retention and graduate destinations.

However, it is not clear that this process is what the present government implies by its “framework to recognise universities offering the highest teaching quality”. If it were to follow the line taken by the previous coalition, it would venture into the realm of new measures of “learning gain”, an approach particularly endorsed by David Willetts, the former universities minister. These measures would then probably be combined with existing ones, such as the National Student Survey, to deliver a component of teaching funding by formula.

Additional measures of “learning gain” imply that universities’ current assessment processes fail adequately to assess generic graduate competences such as critical thinking and problem solving. To appraise these separately would require tests at the beginning and at the end of undergraduate programmes. These could include the Collegiate Learning Assessment, which is used mainly by American institutions to benchmark their students’ performance, and the Assessment of Higher Education Learning Outcomes, an initiative from the OECD.

It is highly unlikely that universities would accept the results from any of these tests as a valid measure of quality. Moreover the additional burden on students of undertaking supplementary examinations related to what would effectively be a national curriculum would be intolerable. In any case, the National Student Survey already acts as a proxy for learning gain, since students who report more positive experiences gain better degrees (even after controlling for entry scores).

Still less palatable than extra tests would be a formula-based allocation of funding using existing measures such as student experience and graduate destinations information.

Australia took this approach in the late 2000s, but the system was widely criticised for methodological weaknesses, such as an ordinal ranking process in which funding variations were determined by differences smaller than the error of the measurements. Although the Australian performance-based system was ultimately derived from a successful scheme that I introduced at the University of Sydney, it omitted crucial components. At Sydney, the funding allocation was only one part of a suite of initiatives to improve teaching and help attract high-performing students. Critically this coherent system included a collegial, academic-driven process of expert review of each faculty’s educational effectiveness.

A teaching evaluation framework that used peer review and was consistent in important respects with the REF would be more acceptable. However, the expense and opportunity cost would be considerable. Why add to the burden of quality assurance at national level? It is not as if university teaching and student experiences are unsatisfactory; on the contrary, they are good in absolute terms and constantly improving.

British universities already have solid teaching quality measures. The National Student Survey in particular has probably been the most effective and best value single policy initiative in the area of improving the UK student experience in the past 10 years. In last year’s review of the National Student Survey, we noted a widespread perception that it had had a profound impact on universities’ commitment to improving learning and teaching. Its results are taken seriously, are built into internal planning and review systems and are responsible for tangible improvements in all institutions. It has not been necessary to link the results to national funding to derive these benefits.

A final consideration weighing against a teaching REF is the impact of other incentives for institutions to provide an excellent student experience. These include the removal of caps on student numbers and, potentially, a more relaxed approach to the setting of fees. Rather than trust the government to introduce an additional framework of control, the nation might instead rely on students to choose wisely and on universities to respond by continuing to compete vigorously for their custom through providing higher-quality experiences.

Paul Ramsden is a key associate of PhillipsKPA, an educational consultancy based in Melbourne, Australia, and a visiting professor at UCL Institute of Education, London.

Published in Research Fortnight, 30 May 2015.


Career achievement award

6 January 2015

This site already contains far too much self-promotion. However, I cannot resist saying how delighted I am to have been awarded the Office for Learning and Teaching’s Career Achievement Award.

The award ceremony was held at Parliament House, Canberra, on 9 December 2014. Details here.


Executive leadership for research development

6 January 2015

We have published a resource for senior university leaders, particularly those charged with improving research performance in less research-intensive higher education institutions: see this link.


Do we need a Higher Education Academy?

25 July 2014

Probably not any more, according to my recent piece for Research Fortnight (subscription required to view, but a version of it appears below):

Ten years ago the Higher Education Academy was set up as a single UK-wide organisation to support teaching and students’ learning experiences. It combined the Institute for Learning and Teaching in Higher Education, which pushed an ideological agenda that all lecturers should become registered teachers, and which had enjoyed a fairly disastrous reception across most of higher education, and the network of subject centres, which had been received much more favourably. Many of the subject centres were led by successful international scholars and they went with the grain of most academics’ beliefs, seeking to support them within their disciplines and accepting that teaching and learning were inseparable from research and subject content.

Combining two organisations with such different cultures and personnel was never going to be easy. Through a series of painful restructures and changes of senior staff, the Higher Education Academy eventually created a working model, but deep divisions remained under the surface and hampered the development of a coherent reputation and value for money. During the first few years many areas continued to be over-staffed and inefficient; there was always at least one interested external body that fought against their reform.

It faced other challenges too. First was the need to adapt services to the differing needs of the four home nations. It also had to cope with frequently changing funding council priorities and directly competing and much better-resourced initiatives such as the Centres for Excellence in Teaching and Learning, a scheme which ultimately failed to deliver its promised benefits. It had to win over divergent communities such as educational developers, Universities UK and professional subject organisations. And at the same time it had to establish standing as a credible research-led organisation.

The first evaluation of the HEA in 2007, by the Higher Education Funding Council for England, recognised that it had had to overcome major challenges in establishing itself as a distinctive organisation and balancing expectations from a wide-ranging and demanding set of stakeholders. It found that the academy had had a positive influence on teaching and the student experience, but that it had yet to realise its full potential. It had to work harder at engaging with partners and customers, in evaluating impact and value for money, in managing subject centres more consistently, and in campaigning for better learning and teaching.

But progress towards these goals continued to be hampered by irreconcilable vested interests. It became abundantly clear that proper change would require radical surgery, and this led to a plan for a leaner organisation and sweeping restructure. The subject centres, despite some significant successes, would have to go; they were simply too devolved for efficient management and much too expensive. The plan was implemented between 2010 and 2013 and the HEA has since become more efficient and more focused.

It came as something of a surprise to me, therefore, to see that a more recent review, published in June 2014, has identified issues that are familiar from several years ago. On the positive side, the HEA’s greatest success has undoubtedly been the establishment of the UK Professional Standards Framework for teaching in higher education, and its associated and expanding professional accreditation services. Other achievements of note include a continuing record of support for individual academics through their disciplines and a range of survey services.

But, according to its reviewers, it has yet to establish better communications with institutional leaders, particularly in pre-1992 universities. It has failed to demonstrate impact clearly enough and still tries to spread itself across too many areas. It has not had notable success in influencing policy, except in some key areas such as its pro-vice-chancellor network, and the quality of its research is variable. It comes across as an organisation that is still more concerned with managing its internal tensions than meeting its customers’ needs.

What of the future? The Higher Education Academy announced in April that its public funding (which accounts for 95 per cent of its £16 million annual income) would end in 2016. Its business development model for a sustainable organisation involves increasing subscriber and consultancy income, but it has already fallen short of its targets in this respect. Its chances in a competitive environment for higher education consultancy must be regarded as slim, unless it can appoint staff with immediate experience of the realities and uncertainties of a private sector business model.

More fundamentally, do British universities need a Higher Education Academy any more? Higher education institutions have come a long way since 2004 in improving the quality of their students’ experiences and engagement. Australia abolished its equivalent organisation a couple of years ago. The British version provides services, knowledge and expertise that institutions think are important. However, these valued functions could be delivered by opening up the remaining market for specialist support services to a range of providers. A small office attached to the funding councils could support competitive tendering by firms and universities for projects. The day of a central, taxpayer-funded body to support the enhancement of teaching in higher education may well be over.

Paul Ramsden is a key associate of PhillipsKPA, an educational consultancy based in Melbourne, Australia. He was the founding chief executive of the Higher Education Academy from 2004 to 2009.

See more at: https://www.researchprofessional.com/0/rr/he/views/2014/6/Is-the-HEA-fit-for-purpose-.html


Published: Review of NSS

4 July 2014

I have been involved in a review of the National Student Survey commissioned by the funding councils. The report came out this week.

Here’s a short summary of some of the conclusions that are not always explicit in the report:

1. A campaign to get the NSS dumped in favour of the U.S. National Survey of Student Engagement (NSSE) has failed. The NSS is valued, valid and impressively helpful as a way of enhancing teaching and the student experience. Universities and colleges don’t want to lose it. They don’t want it replaced by a survey that focuses on student engagement rather than on the quality of teaching.

2. The NSS is not a ‘satisfaction’ survey. It was designed as a student evaluation instrument (there is only one question about overall satisfaction).

3. A falsehood has been widely circulated that the NSS is not related to academic achievement or ‘learning gains’. Although the results have not been made public, it is certain that higher scores on the NSS are associated with better degree results, even after controlling for students’ entry qualifications.

4. Any modifications to the NSS will need to be carefully trialled and extensively tested to ensure that changes do not compromise the strengths of the survey and its considerable value to higher education institutions. Minor changes include the potential inclusion of a small number of extra questions about students’ engagement with quality processes and learning, many of which are already available in the optional set of questions.

5. The NSS has probably been the most effective and best value single policy initiative in the area of improving the UK student experience in the last 10 years.


Change of career!

20 December 2012

Having read the interesting ideas of Professor Howard Hotson about higher education policy, I realise I could benefit from a change of career.

Howard knows a lot about early modern intellectual history and teaches at Oxford. He isn’t an expert on higher education, although that hasn’t stopped him giving speeches about how it’s in a global crisis and writing about it for the Guardian.

Following Howard’s exemplary lead, I am going to stop talking about things I know about and shift to something I know nothing about.

I thought early modern intellectual history might do the trick.

Watch out for my forthcoming books and papers on Protestant Europe in this period. I rather fancy a special emphasis on international intellectual developments affecting Germany between 1555 and 1660. I know nothing about it at all.

I think I might supplement this with some stuff on traditions of intellectual innovations connecting late Renaissance humanism to the new philosophies of the 17th century; pretty much head to head with Howard, in fact.

One last thing, Howard. I’m already negotiating a generous advance on my next book on the revival of millenarianism in early modern Europe.