Research Question

  1. What methods and metrics do organizations use to assess research performance at the level of the research portfolio?

The effectiveness of research, like that of other activities, can be evaluated at different levels — the individual project, a group of projects or program, or a larger grouping that might include multiple programs (a portfolio). Focusing on options for research portfolio evaluation, RAND Corporation researchers found many metrics in use or recommended for federal agencies and private, research-supporting organizations and organized them in a taxonomy. This report presents the characteristics and utility of these metrics, organized by individual stages in a logic-model framework, mapping portfolio metrics to the upstream stages of inputs, processes, and outputs and the downstream stages of outcomes and impacts. At each stage, categories of metrics are composed of sets of metric types, each of which is, in turn, composed of individual metrics. In addition to developing this taxonomy, the authors appraised key attributes of portfolio evaluation metrics and described the trade-offs associated with their use. This structured, annotated compilation can help the Defense Health Agency and other entities that evaluate research portfolios to select, develop, or revise the metrics they use.

Key Findings

Broadly speaking, research organizations use three types of portfolio-level metrics

  • These are (1) aggregations of project-level data, derived by adding up data from individual projects; (2) narrative assessments; and (3) general (e.g., population-level) metrics, which might or might not depend on project-level data.

Nonmilitary organizations appear to focus more than other organizations on measuring research results beyond outputs

  • Entities outside the U.S. Department of Defense (DoD) expended more effort than those inside DoD on measuring the ultimate consequences of portfolio investments — outcomes and impacts — rather than the inputs, processes, and outputs, which are perhaps more easily measured.

There is no one-size-fits-all approach to portfolio assessment

  • Noteworthy innovative work in this area is taking place across research organizations.
  • Evaluation at the portfolio level faces challenges similar to those at the project level, including research time lags, attribution and contribution issues, and data-collection burden. In addition, portfolio-level assessments need to contend with the challenge of portfolio heterogeneity.
  • There are trade-offs associated with an organization's use of various assessment metrics; constructing an appropriately balanced mix of metrics requires taking into account the organization's context, resources, and objectives for evaluating its research.

Recommendations

  • The organization now performing the functions and responsibilities of the now-dissolved Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCoE) should review the value of currently collected data on upstream metrics (inputs and processes).
  • That organization should identify opportunities for streamlining reporting requirements and activities.
  • It should incorporate outcome and impact measurements in tracking and assessment processes.
  • It should consider developing outcome and impact tracking and measurement in an incremental fashion.
  • It should construct a balanced mix of metrics and determine how underlying data will be collected.

Table of Contents

  • Chapter One

    Introduction

  • Chapter Two

    General Findings and Considerations

  • Chapter Three

    Overview of Identified Metrics

  • Chapter Four

    Conclusions for the Psychological Health Center of Excellence, the Defense and Veterans Brain Injury Center, and the Defense Health Agency

  • Appendix A

    Variety in Evaluation Frameworks and Tools

  • Appendix B

    Suggested Prioritization of Metrics

  • Appendix C

    Additional Data on Metrics

  • Appendix D

    Stakeholder Interview Topic Guide

  • Appendix E

    Stakeholders Consulted for the Study

  • Appendix F

    The Research Portfolio Management Data Dictionary of the Former Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury

This research was sponsored by what was then known as the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCoE) and conducted within the Forces and Resources Policy Center of the RAND National Defense Research Institute, a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies, and the defense Intelligence Community.

This report is part of the RAND Corporation research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

Permission is given to duplicate this electronic document for personal use only, as long as it is unaltered and complete. Copies may not be duplicated for commercial purposes. Unauthorized posting of RAND PDFs to a non-RAND Web site is prohibited. RAND PDFs are protected under copyright law. For information on reprint and linking permissions, please visit the RAND Permissions page.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.