REF 2014 Shows That Research Impact Can Be Assessed

Commentary (Higher Education Funding Council for England Blog)

by Catriona Manville

March 27, 2015

For the first time anywhere, the UK has allocated funding to universities according to an assessment of research impact. RAND Europe's evaluation of the Research Excellence Framework (REF) 2014 process reveals that it worked, allowing different types of impacts drawn from a wide range of disciplines to be compared and scored.

There are many reasons to measure the impact of research. These have been characterised as the '4 As': advocacy, accountability, analysis and allocation (Note 1). Impact assessment trials had been undertaken before, such as the REF pilot (Note 2) in 2010 and the Excellence in Innovation for Australia trial (Note 3), but REF 2014 was the first exercise conducted on such a large scale and the first whose scores are being used to allocate funding.

RAND Europe carried out an evaluation of the process whereby impact templates and case studies submitted by universities to REF 2014 were assessed by panels of academic peers and research users (Note 4). A large majority of panellists felt that the process enabled them to assess impact in a fair, reliable and robust way.

Panellists felt able to assess different types of impact. They noted that some types were harder to assess, such as impacts on policy or public discourse, and impacts where, as a result of research findings, a potential change was not implemented. Difficulties often arose from a lack of evidence supporting the claims being made. This reflects universities' concern that some types of impact would not be presented because they were harder to evidence (Note 5). However, panellists emphasised that they were still able to make judgments in these cases.

Panellists felt that bringing together the different perspectives of academics and research users was valuable and successful. It was widely agreed that the two perspectives moderated each other and added to panellists' confidence in the process.

It is important to remember that the process was new, so adjustments were inevitable. Through our evaluation we identified a number of incremental improvements that could be made to the process. These include: giving panellists access to the underpinning research and corroborating evidence; applying the case study and impact template formats less stringently; potentially using closed questions to confirm the eligibility of case studies; and offering universities and panellists clearer guidance on the process (for example, on how to present eligibility information within the case studies, and on whether impacts need to demonstrate both reach and significance).

In addition, we have flagged some areas for further consultation for subsequent REF exercises, or for other countries looking to implement a similar system. These include: how to manage variations in the way the process was conducted; how to avoid the risk of unsubstantiated and false claims being made; how to clarify the processes for assessing different kinds of impact; and how best to capture the information pertaining to the wider university environment for nurturing and developing impact.

REF 2014 provides a valuable working model from which other research systems can learn how to assess the impact of research, whether for allocation as in the UK, or for advocacy, analysis and accountability.


Catriona Manville is a senior analyst at RAND Europe in Cambridge.


Notes

  1. Morgan Jones, M. & J. Grant. 2013. 'Making the Grade: Methodologies for Assessing and Evidencing Research Impact.' In 7 Essays on Impact, edited by Dean et al., 25-43. DESCRIBE Project Report for Jisc. Exeter: University of Exeter.
  2. Technopolis. 2010. 'REF Research Impact Pilot Exercise Lessons-Learned Project: Feedback on Pilot Submissions — Final Report.' Brighton: Technopolis.
  3. Morgan Jones, M. et al. 2013. 'Assessing Research Impact: An International Review of the Excellence in Innovation for Australia Trial.' Santa Monica, Calif.: RAND Corporation. RR-278-ATN.
  4. Manville, C. et al. 2013. 'Assessing Impact Submissions for REF 2014: An Evaluation.' Santa Monica, Calif.: RAND Corporation. RR-1032-HEFCE.
  5. Manville, C. et al. 2013. 'Preparing Impact Submissions for REF 2014: An Evaluation: Findings and Observations.' Santa Monica, Calif.: RAND Corporation. RR-727-HEFCE.

This commentary originally appeared on Higher Education Funding Council for England Blog on March 27, 2015. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.