Form-filling our way to excellence

British universities are ill-served by the Research Excellence Framework

Last week, the results were announced of the Great University Handicap — otherwise called the Research Excellence Framework, or REF. In it the universities of the UK are graded on three things: the articles written by their academics, the impact of their research on the community, and the merits of their “research environment”. To a considerable extent, what hangs on these results is how much money each institution gets from the taxpayer.

This is an intermittent exercise going back nearly 40 years. It was started in 1986 by the Thatcher government, which believed — admittedly with some reason — that university teachers ranged from the brilliant and conscientious to the complacent and frankly indolent, with a worrying number in the latter camp. With falling higher education budgets, and an increasing view of money paid to colleges as investment by the community rather than support for scholarship, it seemed the obvious way to direct funds to the deserving. The present version is the eighth. Each has involved successively more bureaucracy (something I know from experience, having prepared a submission for the previous one).

The thinking is at least understandable. It may be misguided: as Joanna Williams rightly said in her Consuming Higher Education, turning college into a commodity you buy, rather than something you participate in for self-improvement, tends to destroy it as genuine higher education. But let that pass. More to the point, for all the vast effort involved, it is increasingly doubtful what good the REF actually does.

The effort involved is vast. No straightforward light-touch form-filling exercise this: instead, each university has to produce a kind of cross between an anthology of work, a public relations release and a large cross-referenced encyclopaedia describing its operation. Innumerable templates grindingly detail what work is eligible, how to treat academics who say their work ought (or ought not) to be submitted, how matters such as impact and environment are to be defined, and so on. Add demands for reams of formal statements, assurances and codes of practice from institutions and departments, and you begin to see why this ties up a goodly proportion of the time of academics who could otherwise have been either teaching or doing research.

For what? Take each strand of the REF in turn. Start with academic writing, known by the rather Gradgrindish term “outputs”, which have to be graded from four-star (“world-leading in terms of originality, significance and rigour”) down to one-star (“recognised nationally in terms of originality, significance and rigour”), or left ungraded. The problem here is how to assess them.

In the solid sciences, life is slightly easier. Papers are often short and to the point; to an expert it is frequently clear if they break big new ground; and counting the later researchers who use their conclusions can provide a useful significance check. In arts and social sciences this may also happen (think, for instance, economic history and a usefully informative analysis of interest rates in Ruritania between 1850 and 1860). But often it is different. Articles are longer; references to earlier writings are often en passant rather than a use of their conclusions; the subject-matter is moral, political or aesthetic rather than factual; and it is in the nature of academic debate in the arts that often your vital breakthrough is my inconsequential dead-end.

In practice, what matters here is often the composition of the committee of academics, appointed centrally (by the senior academics who run the bodies in charge of distributing research funds) for each subject. Although the authorities stoutly insist that these committees are absolutely impartial about all forms of academic output, it is an open secret that some kinds of writing, and some academic journals, have tended to enjoy better prospects than others. Departments carefully watch who is on these committees and adjust their advice to academics accordingly.

Or turn to impact. It sounds good to say piously that the state should be more willing to fund influential than inconsequential research. But while this may work up to a point with science (you may well be able to spot the research papers behind a cancer breakthrough, or a revolutionary manufacturing process), elsewhere things are less straightforward.

What is “impact” in the social sciences? In practice it’s often a case of repeated efforts by departments to persuade some authority or quango to say that one of the department’s studies was taken into account when constructing some outreach or equality programme. With history, for instance, contacts with the heritage industry certainly help. So — perhaps predictably — does getting someone to certify that you have engendered progress in social welfare, gender and sexuality issues, or attacked social injustice and inequality.

In assessing research environment, yet more difficulties emerge. Either an academic feels inspired or they don’t; a university department either gels or disintegrates. Unfortunately there’s no plausible way to measure intangibles like that. 

The result is predictable: a concentration on other things more readily measured, even if of less clear utility. Large grants from funders, for example, are welcome (as if the quality of a research culture depended on its expenditure). So are bureaucratic procedures for things like mentoring staff — the more detailed and meticulous the better. And, as if you hadn’t guessed, a good deal of attention goes to the ability to document a commitment to EDI (equality, diversity and inclusion), where the HR department probably has the necessary codes of practice, procedures and so on conveniently ready to hand.

For all the good faith of those involved, it’s hard to see this operation as either time or resources well spent, or something likely to improve either scholarship or student education. Obvious cases aside, estimating the quality of an academic article, or how inspiring a university department is to work in, remains very subjective and open to argument. Nor, at least in the social sciences, is it clear that we want to encourage the angling of research to catch the eye of public authority or quango bureaucrats to obtain a good “impact” score.

This leaves one question: if not this flawed exercise in creative paper-shuffling, what should inform decisions about public university subventions? We can’t simply entrust it to the judgement of some panel of the great and the good. Nor should we leave it in the hands of institutions themselves. If, for example, we put a premium on good degrees, or on admission of those not traditionally qualified, that will simply lead to discreet grade inflation or dumbing-down.

On this we have to remain open to ideas. But here’s an initial suggestion. Perhaps we should leave it up to the consumers, with a twist. How about making payments dependent on the grades of the students an institution manages to attract, on the basis that the higher a student’s grades, the more likely they are to call out nonsense and indolence? It’s an idea worth thinking about.
