Report
Research-Portfolio Performance Metrics
Nov 12, 2019
Considerable and often costly research is conducted with support from public and private organizations on a wide range of topics, such as health care, educational approaches, and manufacturing practices. Such research can be fundamental or applied.
Ascertaining whether the research yields concrete benefits poses a complex problem. A long time can pass between when research is completed and when the effects become apparent, and linking observed results to the underlying research is often difficult. Evaluating research can become even more complex when it is part of a portfolio—a body of research consisting of many projects done over years, or programs that are themselves collections of projects. Evaluation of research portfolios must not only overcome the challenges inherent in evaluating individual projects but also consider how the projects in the portfolio relate to each other and what synergies might accrue from multiple projects.
The former Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury and its successors, now subsumed by the Defense Health Agency, asked the RAND Corporation to help them understand how others evaluate the performance of research portfolios. To answer this question, the RAND team reviewed the research practices of 34 prominent federal and private agencies and organizations known for developing, executing, and evaluating portfolios of a variety of types of research: basic, applied, and translational. This work consisted of a literature and document review and a series of interviews with representatives of selected research-funding institutions. The team used the collected data to develop a taxonomy of assessment metrics, organized according to the individual stages of a generic logic model (inputs, processes, outputs, outcomes, and impacts).[1]
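To make the taxonomy's structure concrete, here is a minimal sketch in Python of how metrics might be organized by logic-model stage. The metric names and descriptions are illustrative assumptions, not items drawn from the report's actual instrument.

```python
from dataclasses import dataclass
from enum import Enum

# Stages of the generic logic model used to organize the taxonomy.
class LogicModelStage(Enum):
    INPUTS = "inputs"
    PROCESSES = "processes"
    OUTPUTS = "outputs"
    OUTCOMES = "outcomes"
    IMPACTS = "impacts"

# One assessment metric, tagged with the stage it measures.
# Field values below are hypothetical examples.
@dataclass
class Metric:
    name: str
    stage: LogicModelStage
    description: str

taxonomy = [
    Metric("research funding awarded", LogicModelStage.INPUTS,
           "Dollars committed to projects in the portfolio"),
    Metric("peer-reviewed publications", LogicModelStage.OUTPUTS,
           "Count of papers attributable to funded projects"),
    Metric("change in clinical practice", LogicModelStage.IMPACTS,
           "Documented adoption of findings in care settings"),
]
```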
The team found that research-funding organizations generally use three types of portfolio-level metrics: (1) aggregations of project-level data, derived by adding up data from individual projects; (2) narrative assessments; and (3) general (e.g., population-level) metrics, which might depend on project-level data. Each type of portfolio-level metric offers advantages and disadvantages (see the table below).
| Type of Metric | Advantage | Disadvantage |
|---|---|---|
| Aggregations of project-level data | Easy to compile, communicate, and compare | Lacks nuance and provides a limited ability to capture the portfolio's added value |
| Narrative assessment | Good at addressing attribution issues | Can be costly to produce and hard to compare |
| General (e.g., population-level) metrics | Uses data that are typically available, comparable, and understandable | Contains possibly weak links to research |
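As an illustration of the first type of metric, the following sketch rolls up project-level counts into portfolio totals. The record fields (publications, patents, trainees) are assumptions made for illustration; the report does not prescribe specific fields.

```python
# Hypothetical project-level records; field names are assumptions.
projects = [
    {"publications": 4, "patents": 1, "trainees": 2},
    {"publications": 7, "patents": 0, "trainees": 5},
    {"publications": 2, "patents": 2, "trainees": 1},
]

# A portfolio-level metric of the first type: a simple roll-up of
# project-level counts. As the table above notes, such aggregations
# are easy to compile and compare but cannot capture synergies
# among the projects in the portfolio.
portfolio_totals = {
    key: sum(project[key] for project in projects)
    for key in ("publications", "patents", "trainees")
}

print(portfolio_totals)  # {'publications': 13, 'patents': 3, 'trainees': 8}
```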
Because research funders have different interests, and given the trade-offs associated with each type of metric, there is no single ideal approach or set of metrics. Instead, a mixed-methods approach that considers the context and needs of the research agenda is likely to offer the best balance. Nevertheless, informed by the taxonomy of metrics, as well as further insights from the reviewed literature and interview testimony, the research team formulated a series of broadly applicable, high-level recommendations. These recommendations can assist other U.S. Department of Defense (DoD) entities that support the health, well-being, and readiness of warfighters and that share a comparatively applied and translational research focus. The taxonomy of metrics is also relevant to any funder of research portfolios. The findings and recommendations are intended to assist anyone seeking an effective framework for assessing the performance of a research portfolio while considering the specific needs of each agency or organization.
This report is part of the RAND Corporation Research brief series. RAND research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.