Feb 3, 2009
This research brief describes a set of tools (logic models, outcome worksheets, and outcome narratives) used by RAND researchers to help the National Institute for Occupational Safety and Health demonstrate and communicate the impact of its research in response to an external evaluation of its research programs. The tools described here potentially have wider application for research programs throughout the federal government and more broadly for research and development programs.
Publicly funded research programs face growing pressure to show how their work has made a difference. This pressure flows primarily from two sources: federal accountability mandates (such as the Government Performance and Results Act) and internal quality-improvement efforts intended to maximize program effectiveness in a time of severe fiscal constraints. This pressure presents special challenges for research programs because of the difficulty of measuring their real-world impact. To take one example, the National Institute for Occupational Safety and Health (NIOSH) is a federal research institute that conducts studies to improve the safety and health of U.S. workers. To demonstrate that its research is contributing to this goal and to identify potential areas for improvement, NIOSH recently invited the National Academies to evaluate the contributions of its research programs to reducing work-related hazardous exposures, injuries, and illnesses. NIOSH asked RAND to help it prepare for these reviews.
A team of RAND researchers worked with selected NIOSH research programs. One of RAND's primary roles was to help each program refine and apply a method for demonstrating the impact of its research. This research brief describes the set of tools used by the RAND researchers (logic models, outcome worksheets, and outcome narratives), which potentially have wider application for research programs throughout the federal government and more broadly for research and development (R&D) programs.
A major challenge in demonstrating research impact is attributing outcomes to specific research activities. The path between research activities and outcomes is often diffuse and indirect. Moreover, the gold-standard method—using an experimental approach that relies on randomly selected comparison groups to determine impact—is generally not possible for federal research programs. Thus, it is critical to articulate clearly the path by which a program achieves outcomes and to provide evidence of progress at each stage along that path. NIOSH outcomes of interest include reduced illness, injury, and hazardous exposure in the workplace. Although it is possible to quantify these reductions and NIOSH's research outputs (in terms of, e.g., reports, demonstrations, or safety products), it is not always simple to link the research activities and outputs to desired outcomes. Logic models can help. A logic model displays the stages across which research inputs are translated into outcomes. The figure shows a notional logic model that captures these stages: inputs, activities, outputs, intermediate outcomes (involving transfer of information and products from NIOSH or other entities to end users), and end outcomes. Logic models thus provide a comprehensive view of a research program: what it does, who uses its outputs, and the outcomes it expects. Logic models can also define the domain of analysis for evaluating impact. By showing the multiple contributors to any given outcome, a logic model can help define a program's sphere of influence—that is, where it can reasonably be expected to contribute.
Of course, it is not enough to show a sequence of events from research to outcomes. There must be evidence that the research is causally connected to downstream outcomes. The guidance that RAND provided to NIOSH research programs for describing the path to outcomes involved identifying changes in the areas of interest for specific programs, such as reductions in work-related hearing loss, and tracing a plausible path backward from these outcomes to precursor research. The team developed a second tool, the outcome worksheet, to support the use of this method. The outcome worksheet reverses the order of the logic-model elements, helping NIOSH researchers think through the causal linkages between specific outcomes and research activities, determine the data needed to provide evidence of impact (which NIOSH programs then had to obtain), and structure the evidence in a systematic framework.
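The relationship between the two tools can be summarized concretely: the worksheet is simply the logic model read in reverse, starting from an outcome and working back toward the research that preceded it. The following is a purely illustrative sketch (not a NIOSH or RAND artifact); the stage labels come from the brief, while the parenthetical examples in the comments are hypothetical.

```python
# Illustrative sketch only: the five notional logic-model stages described
# in the brief, and the outcome worksheet as the same stages in reverse,
# so that analysis starts from an outcome and traces back to the research.
LOGIC_MODEL_STAGES = [
    "inputs",                 # e.g., funding, staff expertise (hypothetical examples)
    "activities",             # research conducted with those inputs
    "outputs",                # e.g., reports, demonstrations, safety products
    "intermediate outcomes",  # transfer of information/products to end users
    "end outcomes",           # e.g., reduced work-related illness or injury
]

def outcome_worksheet_order(stages):
    """Reverse the logic-model order so the worksheet begins with the outcome."""
    return list(reversed(stages))

print(outcome_worksheet_order(LOGIC_MODEL_STAGES))
```

The reversal is trivial as code, but it captures the method's key move: instead of asking "what will this research lead to?", the worksheet asks "what led to this outcome?"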
The logic models and the information from the outcome worksheets were part of a larger evidence package submitted to the reviewers. The evidence package communicates how research activities have contributed to desired societal outcomes. The reviewers are expected to use their expert judgment and knowledge of the particular research program under review to evaluate the claims in the evidence package about the role of NIOSH programs in contributing to intermediate outcomes (such as changes in workplace practices) or end outcomes (such as reductions in hazardous exposure).
Communicating to evaluators requires a clear understanding of audience and purpose. For example, there are key differences between communicating to researchers, who are interested primarily in methods and findings, and communicating to evaluators. In this case, the evaluators were expected to read evidence packages with an orientation primarily toward making a decision about claims of impact. It follows that an evidence package has a clear, practical purpose: informing the evaluators' decision about whether the package makes a convincing case for the impacts traced back to the research program.
Thus, the team developed a third tool, the outcome narrative, to help evidence packages accomplish this practical purpose. The outcome narrative is a structured set of answers to a series of specific questions: (1) What is the major societal problem this research is intended to address? (2) What approach was used to address this issue? (3) What were the major outputs from this research area? How and to whom were the products transferred? (4) What effect did the outputs have on the broader community? (5) What are some specific research activities currently under way or in planning in response to the problem? The responses to these questions were drawn directly from the information in the outcome worksheet.
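Because the narrative is a fixed set of questions answered from worksheet material, it can be thought of as a simple template. The sketch below is purely illustrative and is not a NIOSH or RAND instrument; the question wording is paraphrased from the brief, and the validation logic is a hypothetical addition.

```python
# Illustrative sketch only: the outcome narrative as a question-answer
# template. Question wording paraphrases the brief; in practice, answers
# would be drawn from the corresponding outcome-worksheet entries.
NARRATIVE_QUESTIONS = [
    "What is the major societal problem this research is intended to address?",
    "What approach was used to address this issue?",
    "What were the major outputs, and how and to whom were they transferred?",
    "What effect did the outputs have on the broader community?",
    "What research activities are under way or planned in response to the problem?",
]

def build_narrative(answers):
    """Pair each narrative question with its answer; all five are required."""
    if len(answers) != len(NARRATIVE_QUESTIONS):
        raise ValueError("each narrative question needs an answer")
    return list(zip(NARRATIVE_QUESTIONS, answers))
```

The structure enforces the method's discipline: a narrative is complete only when every question has an answer grounded in the worksheet.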
Collectively, these tools assisted in describing the program's path to outcomes, collecting the data to support each step on the path, and communicating this information to evaluators.
These tools have other uses. Logic models can support project planning and management. They provide a structure for determining whether existing goals are aligned with program operations. These goals can drive the development of measures for gauging progress toward outcomes. Outcome worksheets are useful for determining the data needed for outcome monitoring and tracking. Using these, research programs can identify which research activities link to outcomes and assess the extent to which transfer activities have led to intended outcomes. Finally, outcome narratives are useful for communicating impact to audiences beyond reviewers.
These tools are part of a portfolio of methods for mapping the causal connections between publicly funded research and its social benefits. Putting the tools described here to rigorous use can be an important step in determining the impact and relevance of federally supported research.