Measuring Impact: How Australia and the UK Are Tackling Research Assessment

by Catriona Manville

In the run-up to this month's release of the Research Excellence Framework (REF) 2014 results, the eyes of research evaluators worldwide are on the UK.

The REF 2014 is a UK-wide initiative to assess the quality of research in UK higher education institutions. Similar assessments of academic research excellence have been conducted in the UK approximately every five years since 1986, and the results are used to allocate research funds to universities.

This year, research has been assessed against three main criteria: the quality of research outputs, the vitality of the research environment and the wider impact of research. In the 2014 REF, wider impact was given a weighting of 20% of the total assessment, and there are calls to increase this weighting (PDF) to 25% in future exercises.

The UK is the first country to attempt to allocate funding based on the wider societal impact of research. Other countries have considered it, and in 2012 a subset of Australian higher education institutions ran a small-scale pilot exercise to assess impact and understand the potential challenges of the process: the Excellence in Innovation for Australia (EIA) impact assessment trial.

RAND Europe evaluated the EIA, and is currently engaged in a process evaluation of the impact element of REF 2014. What can we learn by comparing the UK and Australian approaches?

Time lags: does some research take longer to prove its effect?

Impact was defined by the REF and the EIA in essentially the same terms: both sought to understand a research project's wider social, cultural, economic and environmental benefits. But impact can take a long time to emerge after research is conducted. Both the REF and the EIA recognised this, and in both exercises case studies could be built on research that pre-dated the impact by up to 15 years, or, in the EIA, by more where there was a special reason.

However, we don't really know what these time lags typically are, or should be. RAND Europe has explored the lag from research finding to impact in different areas of biomedical science and found it to be around 15–20 years.

Discipline or impact area?

In the UK, case studies were assessed within their discipline, and authors could claim multiple types of impact within a case study. In contrast, the Australian pilot was organised around types of impact, defined as “socio-economic objectives”, for example an impact on defence or on the economy. Assessing different types of impact separately may limit the extent of the impact described, and pre-judges what types of impact are eligible for assessment.

Different perspectives: industry and charities

Both exercises drew their assessors from both inside and outside higher education, because it is vital that research users, in industry, business and charitable organisations, contribute their view of impact.

In both the REF and the EIA, all impact case studies were reviewed by an external research user. However, the exercises employed different proportions of research users among their impact assessors: 70% in Australia, but only 32% in the REF. This may reflect the difference in scale: the EIA had 162 case studies to review, whereas the REF assessed 6,975. A combination of perspectives is clearly necessary, but the ideal balance is a topic for debate in future assessments.

Case studies

The UK and Australian case study templates requested information in a different order. In the EIA, the impact section came up front, followed by the supporting research; the REF template took a more chronological approach, describing the underpinning research before the resulting impact. Neither approach seems perfect. Thinking about impact first may narrow the range of examples generated, while a strictly linear narrative cannot capture the feedback loops that bring impacts back round to inform the next steps in research.

Reliability of assessments

Finally, we offer some reassurance to counter researchers' uncertainty about how assessment panels can compare such a huge variety of case studies. Based on our work on the EIA and assessors' own reports on the 2010 REF pilot (PDF), panels are able to account for factors such as the quality of evidence, the context in which the impact occurred, and even the quality of the writing, in order to differentiate between and grade case studies.


Catriona Manville is a senior analyst in the innovation and technology policy team at RAND Europe, Cambridge, UK.

This commentary originally appeared on The Guardian on December 7, 2014. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.