Benefit-Cost Analyses of Child and Family Preventive Interventions

(Social Policy Report of the Society for Research in Child Development)

U.S. President Barack Obama playing a game with children in a pre-kindergarten classroom in Decatur, Georgia

Photo by Jason Reed/Reuters

by Kimber Bogard, Lynn A. Karoly, Jeanne Brooks-Gunn

March 12, 2015

It is a pleasure to write a commentary on this comprehensive policy article on evidence-based programs for children, youth, and families. Supplee and Metz make a strong case for using research evidence in program decisions at the local, state, and federal levels. The emphasis on using social science evidence to choose programs for implementation has gained momentum over the past two decades. Examples include the randomized controlled trials sponsored by the Institute of Education Sciences across several administrations, as well as the reliance on evidence in Office of Management and Budget deliberations during the Obama administration (the new book by Haskins and Margolis, published by the Brookings Institution and entitled Show Me the Evidence: Obama's Fight for Rigor and Results in Social Policy, is a fascinating read on this history). The Coalition for Evidence-Based Policy, directed by Jon Baron, and the Washington State Institute for Public Policy (WSIPP), directed by Steve Aos, are examples of efforts to summarize the extant evidence on a variety of programs, in the hope that their syntheses will inform decisions about funding (or defunding) programs.

We would like to add that, in thinking about scaling programs and services for children, youth, and families, the research community should also consider economic evidence as part of its research programs in order to inform funding decisions. The Board on Children, Youth, and Families of the Institute of Medicine and National Research Council (IOM/NRC) assembled a planning committee of experts to design a workshop on the use of benefit-cost analyses as part of the evaluation of prevention programs. The workshop was held in late 2013, and a summary has been published (IOM and NRC, 2014).

Unlike consensus studies, IOM/NRC workshops do not result in specific recommendations. Rather, their purpose is to highlight issues that might be important to consider on the specific topic under discussion. The first author of this commentary is the director of the IOM/NRC Board on Children, Youth, and Families; the second author was a member of the workshop planning committee; and the third author was the chair of the workshop planning committee.

Some of the issues considered in the workshop included:

  • What level of research rigor should be met before results from an evaluation are used to estimate or predict outcomes in a benefit-cost analysis?
  • What are best practices and methodologies for costing prevention interventions, including the assessment of full economic/opportunity costs?
  • What processes and methodologies should be used when theoretically and empirically linking prevention outcomes to avoided costs or increased revenues?
  • Over what time period should the economic benefits of prevention interventions be projected?
  • What issues arise when the results of benefit-cost analyses are applied to prevention efforts at scale?
  • Do benefit-cost results from efficacy trials need to be adjusted when prevention is taken to scale?
  • Can we define standards that all studies should meet before they can be used to inform policy and budget decisions?
  • How could research be used to create policy models that can help inform policy and budget decisions, analogous to the benefit-cost model developed by the Washington State Institute for Public Policy?

According to Supplee and Metz, the research community should attend to at least three specific elements in designing programs that can be scaled: interaction among multiple stakeholders; detailed reporting on how programs are implemented, so that they can be replicated and scaled; and support for understanding the results of the studies. Speakers at the IOM/NRC workshop also identified building political will, assessing and documenting the costs and benefits of programs, and reporting results in a way that decision makers and implementers can understand and act on. In essence, the workshop brought together researchers and decision makers to highlight issues to be considered in scaling preventive interventions for children, youth, and families.

Two specific approaches, Communities That Care (CTC) and WSIPP, were presented at the workshop to address the call for including multiple stakeholders in designing programs for children, youth, and families and in reporting on their outcomes. Margaret Kuklinski described CTC, a coalition-driven approach to preventive intervention that involves mayors, teachers, and parents. The coalition uses survey data from the community to make decisions about program selection, and it then monitors and evaluates outcomes to determine impact and to guide any necessary course corrections in programming. Having decision makers at the table in the design phase is an important component of this approach.

Steve Aos described WSIPP's model, which presents policy options to legislators in a standardized way that allows apples-to-apples comparisons of program benefits and costs, with follow-up discussions to translate more difficult concepts such as risk and uncertainty. This is another example of a process whereby community- or state-level data on program implementation (both impacts and costs) are used to inform decisions about program funding and implementation. Aos noted that it is very important to use local data that represent local conditions and to update reports to decision makers with new data annually. Both speakers indicated that building a coalition, or engaging policymakers in discussions about design and reporting, builds the political will to scale and sustain programs.

Workshop participants emphasized the importance of collecting more information about the key elements of program design and implementation and of linking the science on effectiveness with funding decisions for programs and services. Panelists noted that although cost analyses are necessary to identify the resources needed to implement programs at scale, this type of information typically takes a back seat to effectiveness and benefit analyses. An ingredients-based approach, in which detailed information is collected on the resources used for program implementation, serves both to document the costs of program delivery (including required infrastructure) and to provide the information needed for taking programs to scale in a sustainable way. Speakers therefore called on the research community to provide rigorous cost analyses, in addition to benefit and effectiveness analyses, so that the full costs to adopt, implement, and sustain programs can be assessed and can in turn inform policy decisions.

In addition, panelists discussed how to translate results to inform policy and practice. They identified several types of decision makers who need information on benefit-cost and evidence-based approaches, including program specialists in the executive branch of government who write regulations, implement programs, and monitor progress. In communicating with these decision makers, simple, evidence-based presentations are needed that convey the strength of the available evidence while also acknowledging areas of uncertainty. Panelists also raised the importance of having evaluation and implementation staff spend more time discussing with decision makers the program and how it is delivered.

Miscommunication between researchers and research consumers can stall or divert efforts to let evidence drive funding decisions. To address this, Jens Ludwig suggested that the research community set quality standards that all evaluations must meet before they are used to inform policy. Identifying the mistakes research consumers make in interpreting the science can also suggest how to communicate research findings more effectively, and engaging research consumers in this process would be welcome.

In sum, the IOM/NRC workshop highlighted the powerful potential of economic evidence to improve well-being by informing investments in children, youth, and families. To realize that potential, however, these efforts must meet standards developed by the research community.


Lynn A. Karoly is a senior economist at the RAND Corporation and a professor at the Pardee RAND Graduate School. Kimber Bogard is the director of the Board on Children, Youth, and Families at the Institute of Medicine and National Research Council of the National Academies. Jeanne Brooks-Gunn is the Virginia and Leonard Marx Professor of Child Development at Columbia University's Teachers College and the College of Physicians and Surgeons.

This commentary originally appeared in Social Policy Report of the Society for Research in Child Development on January 1, 2015. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.