Strengthening Research Portfolio Evaluation at the Medical Research Council

Developing a Survey for the Collection of Information About Research Outputs

by Sharif Ismail, Jan Tiessen, Steven Wooding

This Article

RAND Health Quarterly, 2012; 1(4):15

Abstract

The Medical Research Council (MRC) wished to better understand the wider impact of MRC research output on society and the economy. The MRC wanted to compare the strengths of different types of funding and areas of research and identify the good news stories and successes it can learn from. As an initial step in this process RAND Europe: (1) examined the range of output and outcome information MRC already collected; and (2) used that analysis to suggest how data collection could be improved. This article outlines the approach taken to the second part of this exercise and focuses on the development of a new survey instrument to support the MRC’s data collection approach. Readers should bear in mind that some later stages of survey development and implementation were conducted exclusively by the MRC and are not reported here.

For more information, see RAND TR-743-MRC at https://www.rand.org/pubs/technical_reports/TR743.html

Full Text

The Project Brief

The MRC had three key aims for this project. It wished to:

  1. Collect information on the range of outputs and outcomes from its funded research, in a way that was amenable to detailed analysis. It also wanted to collect information on impacts ranging from knowledge production, through research capacity building, to wider outputs including dissemination, policy impact, and product and intervention development.
  2. Build a better understanding of the range of research that it funds, across the spectrum from basic to clinical research.
  3. Collect a combination of quantitative and qualitative information—on both the types of impacts produced by MRC-funded research, and the perceptions of researchers themselves of the support they receive from the MRC.

What We Did

To help the MRC meet these objectives, RAND Europe was engaged to provide support in constructing an evaluation framework, building on an extensive body of research work in this field over the past few years. In particular, the project built on work jointly conducted by RAND Europe and the Health Economics Research Group (HERG) at Brunel University in recent years to develop a “Payback Framework” based on the following categories:

  • Knowledge production
  • Research targeting and capacity building
  • Informing policy and product development
  • Health and health sector benefits
  • Wider economic benefits.

To better address the particular needs of the MRC, and given the focus on collecting information from the researchers carrying out the research, the project team decided to focus development of the new tool on:

  • Research targeting and capacity building
  • A new category, for dissemination activities
  • Informing policy and product development.

Data on other categories in the framework, while important, were thought to be more efficiently gathered by other means.

We then evaluated a series of potential approaches to data collection. These included:

  • Cataloguing tools, such as tick-list and menu-based approaches; exemplar scales; and calibrators
  • Mapping tools, including research pathways.*

In discussion with the MRC, it was decided that a tick-list based approach, with additional questions to capture detailed information about research impacts, was best suited to this exercise. These discussions took into account well-known challenges in research evaluation. These included the accuracy of researcher recall in systems reliant on self-reporting of outputs and impacts, and the problem of attribution, which can be significant for researchers holding multiple forms of funding support at the same time. There was also debate about the appropriate level of detail to request from researchers, and how to balance the MRC's need for detailed information against the likely burden on researchers.

Building on a tool produced through prior work with the Arthritis Research Campaign (ARC) in the UK, we then adapted and developed a survey questionnaire to respond to the MRC's evaluation requirements. This tool was tested through an advisory group workshop, stakeholder workshops with academic researchers (both intra- and extra-mural), and finally through cognitive interviews with a series of researchers.

The MRC used this tested instrument, with additional questions, as a basis for its new online questionnaire (the MRC Outputs Data Gathering Tool—ODGT). The ODGT was to be directed at all MRC-supported researchers and research establishments, both intramural (MRC Institutes, Centres and Units) and extramural (research funded through grants, studentships and fellowships outside intramural establishments). The questionnaire sought information on both short-term outputs from individual research grants and longer-term outcomes reported by interviewed researchers. RAND assisted in testing this survey tool with MRC-supported researchers before the ODGT was launched in September 2008. Details of the results of this exercise are available separately from the MRC.** The ODGT experienced problems in its first year of operation, which led the MRC to review and improve the IT implementation and to simplify the data collection tool. The new tool was named MRC e-Val and was due to be used for the first time in the autumn of 2009.

Key Lessons Learned

Among the most important lessons from this project were the following:

  1. Consider the ultimate objective of your framework. Prior to developing a research evaluation framework it is crucial to define the ultimate purpose a framework should serve and to be very aware of the context it will operate in.
  2. Choose the right evaluation method. There exists a wide range of evaluation methods, from bibliometric analysis to micro- or macroeconomic analysis of the economic return on research. Each has a specific set of advantages and disadvantages, and the selection should be closely linked to the objective of the evaluation.
  3. Be aware of the conceptual difficulties. Research evaluation exercises involve significant conceptual difficulties. While such problems are not always easy to solve, they should at least be acknowledged in the analysis of the results.
  4. Engage with stakeholders at every stage of the development process. Stakeholder engagement can prove essential in developing a framework, as this project demonstrated.

Notes

* For further details on these approaches, please see Steven Wooding and Stijn Hoorens, Possible approaches for evaluating Arthritis Research Campaign grants: A working paper, Cambridge, UK: RAND Europe, WR-662-ARC, 2009. As of 26th January 2012:
http://www.rand.org/pubs/working_papers/WR662.html

** The ODGT web page may be found here (as of 23rd July 2009): http://www.mrc.ac.uk/Achievementsimpact/Outputsoutcomes/index.htm

RAND Health Quarterly is produced by the RAND Corporation. ISSN 2162-8254.