The TEF is a waste of time and money

22 June 2016

Version of an article published in Research Fortnight, June 2016:

Just over a year ago I wrote in Research Fortnight about the proposed Teaching Excellence Framework. I contended that a TEF based on metrics was unlikely to make sense unless it was part of a broader system that included a substantial proportion of expert review. But such reviews would impose a material opportunity cost on universities. Given the generally high standards of teaching in UK higher education, the existence of a sound measure of quality that was broadly supported across the sector (the NSS) and other evidence of largely positive student experiences and outcomes, a TEF would seem to be a pointless additional burden.

There would also be the small matter that it might not work. When a very similar scheme, linked to marginal funding, was tried in Australia, it did not last for long. Despite the existence of an expert panel to moderate the results, chaired by probably the best person to do it (the late Sir David Watson), the variations between universities were usually minor and rarely statistically significant. The scores obtained and the resultant funding variations remained highly controversial. Ultimately the system lacked credibility and was quietly dropped.

In the face of the evidence, why are we going ahead with a TEF? Governments are less interested in evidence than in selecting soundbites to support their agendas. The fact that we already provide an outstandingly good higher education experience for most students does not fit the narrative that universities must be continually pressured to fix imagined weaknesses. On this account, the problems of a metrics-based system moderated by an expert panel, so well revealed by the Australian experience, can be dismissed and the negative comments made in the consultation can be subjected to the usual excuse that ‘maybe more research is needed – so we will proceed a bit more slowly’.

The White Paper itself provides an extraordinarily weak justification for the TEF. Its authors set the scene by covering familiar yet questionable ground – assertions that students need better information about teaching quality (disputable at best, particularly if it is not at discipline level), that robust measures of teaching quality in HE are not available (untrue), that student satisfaction is important (hardly – it is the quality of learning experiences and outcomes that matter, ‘satisfaction’ being a mere by-product), that contact hours are key to a better experience/greater satisfaction (definitely wrong – did they not look at the Open University?), and that there is a problem with social mobility that the TEF will help address (even if there were such a problem, why should a teaching assessment exercise address it?).

After this poor start, things get better with the proposed metrics. The TEF will initially use three indicators: a composite (calculation unspecified) of NSS data from the ‘teaching on my course’, ‘assessment and feedback’ and ‘academic support’ questions; non-continuation rates; and employment or further study rates. These are standard measures and if the data are properly benchmarked they promise to provide reasonable validity. However, they are hardly without problems. The NSS was never designed to operate at institution level. Even at discipline level, courses have differing profiles of scores on the NSS dimensions; they can be strong on ‘teaching’ and weak on ‘assessment and feedback’, for example — or vice-versa. Combining the measures will simply reduce the size of any differences.

Variations between institutions on a single summative criterion are likely to be minimal. Expect some major differences of opinion and probable challenges to the nice judgements made at the margins between ‘meets expectations’, ‘excellent’ and ‘outstanding’. A measure that labelled only a few institutions at the extremes as good or bad might be a better bet; as an Office for National Statistics report on the data sources for the TEF tentatively puts it, it ‘may be possible to identify a small number of institutions which are significantly different, that is significantly better or worse’. The same report argues that the NSS and the retention and destinations data all suffer from the need to define the target population more clearly, to determine the extent of under- and over-coverage, and to adjust for non-response.

An altogether better effort than the White Paper’s attempt to vindicate the TEF is the technical consultation on the assessment criteria for it. Still in draft form, of course, it has a properly thought-out framework (Teaching Quality; Learning Environment; Outcomes and Learning Gain) with criteria and evidence clearly delineated for each. I am tempted to suggest, though, that the principles have been derived from the existence of suitable indicators rather than the other way round. For some inexplicable reason, the NSS questions on personal development, such as improved communication skills, are not included among the indicators of learning outcomes and gain. Curiously, the proportion of HEIs falling into the three performance categories is pre-defined (‘norm-referenced’), with 50 to 60 per cent of institutions in the middle or average category (although it is labelled ‘excellent’).

The role of assessors and the expert panel, who will be very busy people, is admirably described and detailed. Indeed the proposed assessment process shows just how heavy the additional work and expense will be. As well as the time of, and payment for, many assessors, each HEI will be invited to submit additional evidence (the ‘provider submission’) on matters such as leadership to support teaching excellence, local surveys, teaching and research links, curriculum involvement of professional bodies … and many others.

It is inevitable that the labour involved in preparing provider statements and deciding what will go into them will reduce an institution’s capacity in teaching, research and administration. If the TEF survives the many obstacles in its way, we may hope for a rigorous evaluation of its impact on teaching quality and learning outcomes. This should incorporate a balance sheet that weighs its considerable costs against any improvements. But I for one will not be holding my breath for it.


Why England doesn’t need a ‘Teaching REF’

6 July 2015

The Conservative manifesto included a commitment to improve higher education teaching, promising: “We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality.”

This statement has been interpreted as a first step towards linking funding to teaching quality—the possibility of a teaching equivalent of the Research Excellence Framework.

Internationally there is nothing unusual about connecting marginal or even primary state funding to educational performance in universities; Austria, Finland, the Czech Republic, Sweden and Norway are among the many countries that do it. The quality and internationalisation of teaching determine as much as 25 per cent of the total funds that Norwegian universities receive, for example.

I have long argued that university teaching should be evaluated in a similar way to research. And, in theory, a teaching REF would seem feasible. It could use expert academic review to look at the significance, originality and rigour of curricula, assessment and teaching, as well as learning outcomes and graduate success. It could also examine whether a programme’s structure and teaching methods had had a positive impact on similar programmes in other institutions. It could assess the quality of the environment for teaching, particularly the effectiveness of leadership and management in making good teaching possible. Many aspects of the judgement of quality would be informed by the Higher Education Funding Council for England’s excellent data on teaching benchmarks, covering matters such as student experiences, retention and graduate destinations.

However, it is not clear that this process is what the present government implies by its “framework to recognise universities offering the highest teaching quality”. If it were to follow the line taken by the previous coalition, it would venture into the realm of new measures of “learning gain”, an approach particularly endorsed by David Willetts, the former universities minister. These measures would then probably be combined with existing ones, such as the National Student Survey, to deliver a component of teaching funding by formula.

Additional measures of “learning gain” imply that universities’ current assessment processes fail adequately to assess generic graduate competences such as critical thinking and problem solving. To appraise these separately would require tests at the beginning and at the end of undergraduate programmes. These could include the Collegiate Learning Assessment, which is used mainly by American institutions to benchmark their students’ performance, and the Assessment of Higher Education Learning Outcomes, an initiative from the OECD.

It is highly unlikely that universities would accept the results from any of these tests as a valid measure of quality. Moreover the additional burden on students of undertaking supplementary examinations related to what would effectively be a national curriculum would be intolerable. In any case, the National Student Survey already acts as a proxy for learning gain, since students who report more positive experiences gain better degrees (even after controlling for entry scores).

Still less palatable than extra tests would be a formula-based allocation of funding using existing measures such as student experience and graduate destinations information.

Australia took this approach in the late 2000s, but the system was widely criticised for methodological weaknesses, such as an ordinal ranking process in which funding variations were determined by differences smaller than the error of the measurements. Although the Australian performance-based system was ultimately derived from a successful scheme that I introduced at the University of Sydney, it omitted crucial components. At Sydney, the funding allocation was only one part of a suite of initiatives to improve teaching and help attract high-performing students. Critically this coherent system included a collegial, academic-driven process of expert review of each faculty’s educational effectiveness.

A teaching evaluation framework that used peer review and was consistent in important respects with the REF would be more acceptable. However the expense and opportunity cost would be considerable. Why add to the burden of quality assurance at national level? It is not as if the quality of university teaching and student experiences are unsatisfactory; on the contrary, they are good in absolute terms and constantly improving.

British universities already have solid teaching quality measures. The National Student Survey in particular has probably been the most effective and best value single policy initiative in the area of improving the UK student experience in the past 10 years. In last year’s review of the National Student Survey, we noted a widespread perception that it had had a profound impact on universities’ commitment to improving learning and teaching. Its results are taken seriously, are built into internal planning and review systems and are responsible for tangible improvements in all institutions. It has not been necessary to link the results to national funding to derive these benefits.

A final consideration weighing against a teaching REF is the impact of other incentives for institutions to provide an excellent student experience. These include the removal of caps on student numbers and, potentially, a more relaxed approach to the setting of fees. Rather than trust the government to introduce an additional framework of control, the nation might instead rely on students to choose wisely and for universities to respond by continuing to compete vigorously for their custom through providing higher quality experiences.

Paul Ramsden is a key associate of PhillipsKPA, an educational consultancy based in Melbourne, Australia, and a visiting professor at UCL Institute of Education, London.

Published in Research Fortnight, 30 May 2015.

Career achievement award

6 January 2015

This site already contains far too much self-promotion. However, I cannot resist saying how delighted I am to have been awarded the Office for Learning and Teaching’s Career Achievement Award.

The award ceremony was held at Parliament House, Canberra, on 9 December 2014. Details here.

Executive leadership for research development

6 January 2015

We have published a resource for senior university leaders, particularly those charged with improving research performance in less research-intensive higher education institutions: see this link.

It’s Offal

9 February 2012

A row has blown up about a new appointment in the government’s gift: head of the Office of Fair Access, which can fine universities for not admitting the ‘right’ kinds of students.

Don’t get me wrong – I admire Les Ebdon as an outstanding educator and Vice Chancellor (though I think he’s wrong about student fees).

But why is there such a non-job as ‘university access tsar’ in the first place? ‘There were thought to have been few applicants for the role’ – hmm, people aren’t as stupid as Vince Cable thinks.

Higher education needs an access regulator like it needs a hole in the head. The government thinks that it can solve issues of social mobility by bullying universities. It hasn’t the faintest idea of how university admissions operate. Universities don’t care who their students’ parents are. They have a vested interest in fair access already, for goodness’ sake. The reason why more kids from poor families don’t go to very selective universities, or any university, is that many of them get a truly lousy education at school.

And we already have plenty of universities that admit people with low or no qualifications. We demean their efforts and their academics’ commitment to students by implying that their students have a sub-standard experience. OFFA is essentially a bastion of elitism, talking up ‘top’ universities as if only they can provide a decent higher education.

Don’t expect it to be abolished soon. It touches all the right statist buttons beloved by the Coalition and the mob who made such a mess before them – bureaucratic, top-down, meddling, controlling, patronising, wasteful of public money, arrogant and vague.

Welcome to England

30 November 2011

Just over a week ago I returned to the UK from a business trip to Australia.

On arrival in Sydney I had presented my Australian passport and been greeted with a smiling ‘welcome home’ from the person behind the desk. How nice!

What a contrast at Gatwick. The first sight was a line of police in paramilitary gear with dogs and submachine guns shouting aggressively at the bewildered passengers to get into single file.

One dog, bless him, showed passing interest in the choccy I’d brought from the plane, but was sharply hauled back by his guard. It’s a dog’s life in the UK border police force.

Then the gruff individual at the passport desk spotted I had another passport in my folder (I always carry them together). Yes, I said, I have an Australian passport as well. He then demanded to see it – ‘to check it’s the same person’. Meekly, after 27 hours on planes, I showed it. But by what conceivable right should he require to see the passport of a national of another country?

So that’s my two countries. One that welcomes its citizens home. The other that grumpily disbelieves and thinks it’s clever to bully them.

This earth, this realm, this interfering and threatening land…

Escaping the tyranny of contact hours

11 August 2011

Click on the link below to read my feature critical of “contact hours” and the government’s latest sport of infantilising the student experience. It appeared in the Times Higher on Thursday 11 August 2011.

When I grow up, I want to be spoon-fed.

There is a text version to download here (Word doc): THE feature 2011 Ramsden2