Assessing Costs and Benefits of Early Childhood Intervention Programs: Overview and Applications to the Starting Early, Starting Smart Program
Jan 1, 2001
As they pay more attention to accountability, funders and implementers of early childhood interventions are becoming more interested in comparing the benefits their programs produce with the costs they incur. RAND has issued a volume providing general guidance for performing such analyses. The report (Assessing Costs and Benefits of Early Childhood Intervention Programs) also offers, as a case study, application of the guidance to a decision faced by the U.S. Substance Abuse and Mental Health Services Administration and the Casey Family Programs in pursuing their Starting Early, Starting Smart (SESS) program. This brief summarizes that guidance.
If you are a decisionmaker beginning to think about measuring costs and benefits, you must first decide what you want to learn about the program in question. Do you want to be able to say whether the benefits the program generates for society exceed its costs, whether the program saves the government money, or which of several programs produces the greatest benefit per dollar spent along a single dimension?
If you are implementing the program whose value you wish to measure, it is important to make this choice as early as possible—ideally, while elements of program planning are still under way. The reason? What you can learn from an analysis of costs and benefits could be limited by the methods used to design the program.
In particular, scientifically credible measures of benefits and costs require a comparison group. This is a group of children who are tracked along with the program participants and who are as similar as possible to the latter except that they do not participate. Care must be taken in selecting these groups. The SESS program, for example, tests the effectiveness of integrating behavioral-health services into preschool education or primary health care for young children. The comparison groups, then, would be children similar to the SESS participants and enrolled in the same kind of preschool or receiving the same kind of primary health care, but without the integrated behavioral-health services. If you do not have a comparison group, you leave yourself open to the criticism that the benefits you measured could have been realized in the absence of your program.
It will also be important to compare benefits and costs for the program you are analyzing with analogous measures for other programs. Such comparison is particularly important for the third question listed above: if you find, for example, that your program yields a given number of points gained on an emotional-development scale per $1,000 invested, that number is meaningless unless it can be compared with another program's performance.
The choice you make among the alternative questions listed above will affect the amount of money and time that must be spent on the evaluation. This is well borne out by studies already completed. The figure shows the social benefits and costs (in answer to the first question above) measured for three programs: Perry Preschool (Ypsilanti, Mich.), the Prenatal/Early Infancy Project (PEIP) (Elmira, N.Y.), and the Chicago Child-Parent Centers (CPCs). For all three of these programs, benefits to society far exceeded the costs, at least for the higher-risk elements of the served population. However, all of the evaluations tracked program participants and comparison groups for 14 years or more. Without these long follow-up periods, not all of these programs would have shown net benefits, because, although all costs have accrued once the intervention is over, benefits can continue to accumulate for years to come as the participants mature. For Perry Preschool, accumulated benefits did not catch up to program costs until the children were 20 years old.
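The break-even dynamic described above can be made concrete with a small sketch. The dollar figures and discount rate below are hypothetical, chosen only to illustrate the mechanics of costs accruing up front while benefits accumulate (and are discounted) over many years:

```python
# Illustrative break-even calculation: a one-time program cost is paid when the
# intervention ends, while benefits accrue annually thereafter and are
# discounted back to present value. All figures are hypothetical.

def breakeven_age(cost, annual_benefit, start_age, discount_rate):
    """Return the first age at which cumulative discounted benefits
    meet or exceed the one-time program cost (None if not within 40 years)."""
    cumulative = 0.0
    for age in range(start_age, start_age + 40):
        years_out = age - start_age
        cumulative += annual_benefit / (1 + discount_rate) ** years_out
        if cumulative >= cost:
            return age
    return None

# Hypothetical: $12,000 per-child cost at age 4, $1,000/year in
# benefits thereafter, discounted at 4%.
print(breakeven_age(12000, 1000, 4, 0.04))  # → 19
```

With these invented numbers, benefits do not catch up to costs until the child is 19, which is why short follow-up periods can make a program with large long-run net benefits appear to be a net loss.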
There are, however, ways to make the tally of benefits and costs more comprehensive without waiting such long periods.
Neither extended follow-up nor the steps needed to demonstrate net benefits in the interim are cheap. A substantial commitment is required to measure net social benefits (or benefit-cost ratios) or net savings to government. It would be less onerous to demonstrate which of several interventions is the most beneficial along a single dimension (e.g., increased achievement test scores, decreased teen pregnancies) per dollar spent. If the resources available to you are limited, such cost-effectiveness comparisons deserve consideration. And full, accurate measures of budgetary costs alone (with no measurement of benefits) can sometimes be helpful in planning the implementation of a program in another site or at a larger scale.
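A cost-effectiveness comparison of this kind reduces to a simple ratio. A minimal sketch, with invented program names, costs, and outcome gains:

```python
# Cost-effectiveness comparison along a single outcome dimension.
# All program names, costs, and outcome figures are invented for illustration.

programs = {
    "Program A": {"cost_per_child": 8000, "test_score_gain": 6.0},
    "Program B": {"cost_per_child": 5000, "test_score_gain": 4.5},
}

def gain_per_thousand_dollars(p):
    """Outcome gain per $1,000 spent -- the single-dimension ratio
    that makes two programs directly comparable."""
    return p["test_score_gain"] / (p["cost_per_child"] / 1000)

for name, p in programs.items():
    print(f"{name}: {gain_per_thousand_dollars(p):.2f} points per $1,000")
# Program A: 0.75 points per $1,000
# Program B: 0.90 points per $1,000
```

Note that the cheaper program can be the more cost-effective one even when its absolute effect is smaller, which is exactly why the ratio, not the raw outcome, is the right basis for comparison when resources are limited.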
Regardless of which analytical approach you take, it is important to realize that, while a simple numerical comparison (such as those in the figure) makes an attractive story, it is not the whole story. Benefits and savings may accrue to some stakeholders and not others. Various sources of uncertainty may make it difficult to predict with confidence that one program will be more cost-effective than another or that net benefits accruing from an intervention will recur when it is replicated under different circumstances.
Benefit-cost analysis is a powerful tool for understanding the relative social worth of different programs and for choosing which one might be the better investment. However, this tool is not comprehensive or precise enough to be the final arbiter. If you are deciding among intervention alternatives, you must also bring to bear your own values and subjective judgment and those of other stakeholders in the community.