Research Brief

Considerable and often costly research is conducted with support from public and private organizations on a wide range of topics, such as health care, educational approaches, and manufacturing practices. Such research can be fundamental or more applied.

Ascertaining whether the research yields concrete benefits poses a complex problem. A long time can pass between when research is completed and when the effects become apparent, and linking observed results to the underlying research is often difficult. Evaluating research can become even more complex when it is part of a portfolio—a body of research consisting of many projects done over years, or programs that are themselves collections of projects. Evaluation of research portfolios must not only overcome the challenges inherent in evaluating individual projects but also consider how the projects in the portfolio relate to each other and what synergies might accrue from multiple projects.

The former Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury and its successors, now subsumed by the Defense Health Agency, asked the RAND Corporation to help them understand how others evaluate the performance of research portfolios. To answer this question, the RAND team reviewed the research practices of 34 prominent federal and private agencies and organizations known for developing, executing, and evaluating portfolios spanning basic, applied, and translational research. This work consisted of a literature and document review and a series of interviews with representatives of selected research-funding institutions. The team used the collected data to develop a taxonomy of assessment metrics, organized according to the stages of a generic logic model (inputs, processes, outputs, outcomes, and impacts).[1]
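
To make the taxonomy concrete, the following minimal Python sketch shows one way such a stage-keyed taxonomy might be represented. It is illustrative only, not drawn from the report, and the example metrics listed under each stage are hypothetical.

    # Illustrative sketch: a taxonomy of assessment metrics keyed by the stages
    # of a generic logic model. The example metrics are hypothetical.
    taxonomy: dict[str, list[str]] = {
        "inputs":    ["funding awarded", "staff hours committed"],
        "processes": ["milestones met", "collaborations formed"],
        "outputs":   ["publications", "patents", "datasets released"],
        "outcomes":  ["changes to clinical practice", "technology adoption"],
        "impacts":   ["improvements in population health", "cost savings"],
    }

    # Print each logic-model stage with its example metrics.
    for stage, metrics in taxonomy.items():
        print(f"{stage}: {', '.join(metrics)}")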

The team found that research-funding organizations generally use three types of portfolio-level metrics: (1) aggregations of project-level data, derived by adding up data from individual projects; (2) narrative assessments; and (3) general (e.g., population-level) metrics, which might depend on project-level data. Each type of portfolio-level metric offers advantages and disadvantages (see the table below).

Advantages and Disadvantages of Portfolio-Level Metrics

Type of Metric                           | Advantage                                                               | Disadvantage
Aggregations of project-level data       | Easy to compile, communicate, and compare                               | Lacks nuance and provides a limited ability to capture the portfolio's added value
Narrative assessment                     | Good at addressing attribution issues                                   | Can be costly to produce and hard to compare
General (e.g., population-level) metrics | Uses data that are typically available, comparable, and understandable  | Contains possibly weak links to research
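
As a concrete illustration of the first metric type, the following minimal Python sketch shows how project-level data might be rolled up into portfolio-level aggregates. It is not taken from the report; the field names, metrics, and values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Project:
        name: str
        cost: float        # input: funding in dollars (hypothetical)
        publications: int  # output: peer-reviewed papers (hypothetical)
        patents: int       # output: patents filed (hypothetical)

    def aggregate_portfolio(projects: list[Project]) -> dict[str, float]:
        # Sum project-level counts into portfolio-level totals and a simple ratio.
        total_cost = sum(p.cost for p in projects)
        total_pubs = sum(p.publications for p in projects)
        return {
            "projects": len(projects),
            "publications": total_pubs,
            "patents": sum(p.patents for p in projects),
            "publications_per_million_dollars": total_pubs / (total_cost / 1e6),
        }

    portfolio = [
        Project("P1", cost=2.5e6, publications=4, patents=1),
        Project("P2", cost=1.0e6, publications=7, patents=0),
    ]
    print(aggregate_portfolio(portfolio))

Narrative assessments and general metrics resist this kind of roll-up, which is part of the trade-off the table describes: aggregations are easy to compile and compare but capture little of a portfolio's added value.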

Because research funders have different interests and because each type of metric involves trade-offs, there is no single ideal approach or set of metrics. Instead, a mixed-methods approach that considers the context and needs of the research agenda is likely to offer the best balance. Nevertheless, informed by the taxonomy of metrics, as well as further insights from the reviewed literature and interview testimony, the research team formulated a series of broadly applicable, high-level recommendations (listed below). These recommendations can assist other U.S. Department of Defense (DoD) entities that support the health, well-being, and readiness of warfighters and that share a comparatively applied and translational focus for their research. The taxonomy of metrics is also relevant to any funder of research portfolios. The findings and recommendations are intended to assist anyone seeking to deploy an effective framework for assessing the performance of a research portfolio while considering the specific needs of each agency or organization.

Findings

  • Upstream data collection (on inputs and processes) tends to be comprehensive, but less is frequently done with downstream metrics (especially outcomes and impacts), where resources could be used more effectively.
  • Key informants expressed concerns about reporting burdens and gave positive examples of the use of central information systems.
  • The non-DoD entities examined appear to have done more than their DoD counterparts to measure research portfolio outcomes and impacts, and an opportunity exists for the DoD entities examined to measure these more systematically.
  • Wholesale implementation of a new impact measurement framework might not be practical, and experimentation with alternatives could be beneficial.
  • In a world of research constraints and performance-measurement demands, an opportunity exists to make explicit choices about the metrics used. This is important because metrics come with trade-offs in such areas as data availability, reliance on expert judgment, and attribution limitations.

Recommendations

  1. Review currently collected data on upstream metrics (inputs and processes) to ascertain whether the current scope of data collection remains useful and whether the benefits of the collected data outweigh the costs of collecting them.
  2. Identify opportunities for streamlining reporting requirements and activities.
  3. Incorporate appropriate outcome and impact measurements in tracking and assessment processes.
  4. Consider developing outcome and impact tracking and measurement in an incremental fashion.
  5. Construct a balanced mix of metrics and determine how underlying data will be collected.

Notes

  • [1] This brief has been drawn from a peer-reviewed report: Marjory S. Blumenthal, Jirka Taylor, Erin N. Leidy, Brent Anderson, Diana Gehlhaus Carew, John Bordeaux, and Michael G. Shanley, Research-Portfolio Performance Metrics: Rapid Review, Santa Monica, Calif.: RAND Corporation, RR-2370-OSD, 2019.

This report is part of the RAND Corporation research brief series. RAND research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.