Using Outcomes to Assess Teen Substance-Use Treatment Programs—How Feasible?

Andrew R. Morral, Daniel F. McCaffrey, Greg Ridgeway, Arnab Mukherji, Christopher Beighley

Research Summary, published July 11, 2007

Abstract

This study explored the feasibility of using outcome data to measure the performance of adolescent substance abuse treatment programs. The results indicate that this approach is problematic. There were few significant differences between 10 model programs and a set of comparison programs. The analysis also identified barriers to valid performance measurement that raise questions about the practicality of using outcome data to assess substance abuse treatment programs. The authors conclude that a more promising approach may be to identify quality-of-care indicators to assess program performance.

Approximately 150,000 people under the age of 18 enter substance abuse treatment programs each year in the United States. Parents, purchasers, and other stakeholders therefore have a strong interest in knowing which adolescent substance abuse treatment services are most effective at reducing substance abuse and other problems. Currently, however, no such information is available about the most commonly used programs. Similarly, there is little rigorous evidence about what might constitute best practice in the treatment of adolescent substance abusers. Thus, unlike other areas of the health care system, in which quality of care and quality improvement over time are measured using indicators such as the proportion of cases receiving accepted standards of care, adolescent substance abuse treatment lacks a quality-of-care framework.

In the absence of quality-of-care indicators, U.S. federal and state agencies are now exploring the use of program outcomes to measure the performance of treatment programs. A team of RAND researchers examined the feasibility of this approach. The study drew on the largest, most complete longitudinal data set collected on youths receiving substance abuse treatment in the United States. The analysts focused on 10 model programs. They chose a mix of three types: three long-term residential (LTR), four short-term residential (STR), and three outpatient (OP) programs. The team estimated each program's relative treatment effects on six outcome measures assessed 12 months after treatment entry: (1) recovery, indicating that a youth is free in the community and not using drugs; (2) substance abuse problems; (3) substance use frequency; (4) illegal activities; (5) emotional problems; and (6) days in a controlled environment (such as detention or residential treatment).

Model Program Outcomes, Adjusted for Patient Differences

Outcome measured                    Long-term residential  Short-term residential        Outpatient
                                    A      B      C        D      E      F      G        H      I      J
Drug problems                       -      -      ↑        -      -      -      -        -      -      -
Drug use frequency                  -      -      ↑        -      -      -      -        -      -      -
Illegal acts                        -      -      -        -      -      -      -        -      -      -
Emotional problems                  ↓      -      ↑        -      ↑      -      -        ↓      -      -
Recovery                            -      -      -        -      ↓      -      ↓        ↓      -      -
Time in controlled environment      ↑      -      ↓        -      -      -      -        -      -      ↑

↑ = significantly better outcome than other programs; ↓ = significantly worse outcome than other programs; - = no significant difference relative to other programs.

Program Effects Were Small

Researchers found few significant differences between the model programs and the comparison programs. As the table above shows, of 60 possible comparisons (10 programs times six outcomes), only 12 results were statistically significant, and six of these were negative; that is, the comparison programs outperformed the model program. This scarcity of positive treatment effects suggests that, after controlling for pretreatment patient differences, the 12-month outcomes of the model and comparison programs were virtually indistinguishable.

The finding that model programs had worse outcomes than the comparison programs in six instances is surprising but requires some caveats. First, half of these effects concern a single outcome variable: emotional problems. The relationship of this outcome to other traditional substance abuse treatment outcomes is not clear; some emotional distress might be expected as a consequence of abstinence from the substances previously used to medicate that distress. Similarly, a fourth apparently negative effect is the finding that program C was associated with increased days in a controlled environment at follow-up. In the context of an LTR program designed to treat youth for a year or more, however, this apparently inferior outcome might actually reflect the program's relative success in retaining youth in the controlled environment of treatment, a seemingly beneficial effect.

Limitations of an Outcome-Based Assessment Approach

The results highlight a number of challenges to outcome-based assessment of youth substance abuse treatment programs. They also suggest strategies for understanding treatment effects among individual programs, types of programs, or geographic regions. The lessons for outcome-based assessment include the following:

Case-mix adjustment is important. The risk profiles of patients in different programs vary. As a result, especially good or bad patient outcomes from a treatment program may reveal more about the risk profiles of its patients than about the program's performance. The relative effectiveness of different programs or groups of programs can be established only by making apples-to-apples comparisons, in this case by comparing similar patients. Likewise, if a program's patient population changes over time, then comparisons of its outcomes over time may not correspond to changes in the program's effectiveness. Finally, if the risk profiles of patients differ by state, comparisons of state treatment outcomes that fail to adjust for these differences are likely to be misleading.
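To make "comparing similar patients" concrete, the minimal sketch below applies one common case-mix adjustment technique, inverse-probability weighting based on an estimated propensity score, to synthetic data. The brief does not specify the adjustment method the study used; the data, the single risk covariate, and the logistic propensity model here are illustrative assumptions only.

```python
# Minimal sketch of one common case-mix adjustment technique: inverse-probability
# weighting with an estimated propensity score. All data are synthetic; the single
# "risk" covariate and logistic model are assumptions, not the study's method.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Pretreatment risk profile (e.g., baseline substance-use severity)
risk = rng.normal(size=n)

# Higher-risk youths are more likely to enter the "model" program
in_model = rng.binomial(1, 1 / (1 + np.exp(-risk)))

# Outcome depends on baseline risk but not on the program (true program effect = 0)
outcome = 2.0 * risk + rng.normal(size=n)

df = pd.DataFrame({"risk": risk, "model": in_model, "y": outcome})

# Unadjusted comparison misattributes the risk difference to the programs
naive = df.loc[df.model == 1, "y"].mean() - df.loc[df.model == 0, "y"].mean()

# Propensity score: probability of entering the model program given pretreatment risk
ps = LogisticRegression().fit(df[["risk"]], df["model"]).predict_proba(df[["risk"]])[:, 1]

# Inverse-probability weights re-balance the two patient populations
weights = np.where(df["model"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
treated = (df["model"] == 1).to_numpy()
adjusted = (np.average(df.loc[treated, "y"], weights=weights[treated])
            - np.average(df.loc[~treated, "y"], weights=weights[~treated]))

print(f"naive difference:    {naive:+.2f}")    # far from zero
print(f"adjusted difference: {adjusted:+.2f}")  # close to the true effect of zero
```

Weighting (like matching or regression adjustment) makes the two patient populations comparable on measured pretreatment characteristics, which is the sense in which outcome comparisons become apples-to-apples; it cannot adjust for differences that are not measured.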

Case-mix adjustment by itself is not enough. Even after adjusting for case mix, it is difficult to draw valid conclusions about the effectiveness of different treatment programs from outcome data. For instance, problems of interpretation arise when the proportion of cases providing follow-up data differs across programs. Among the 10 programs studied, one had substantially lower rates of follow-up than the others, and the outcomes observed for this program appeared to be especially good. If the patients observed at follow-up differed systematically from those who were not observed, the outcomes for this program might be biased upward relative to those for the other programs. Another challenge concerns high rates of institutionalization at follow-up: When large proportions of cases are in controlled environments, such as prisons, many outcome measures are difficult to interpret. Low levels of crime, drug use, and drug problems, for instance, would ordinarily be viewed as positive outcomes, but not if they reflect high rates of incarceration among a program's clients. Moreover, if programs in different geographic regions are compared, differences in institutionalization rates cannot necessarily be attributed to treatment program effects; they might instead reflect differences in regional laws or law enforcement, the availability of inpatient services, or other community characteristics. Indeed, community differences in resources, available support, opportunities, drug problems, and other characteristics may all influence the outcomes of youth in those communities, confounding efforts to isolate treatment program effects on the basis of client outcomes. These considerations suggest that, even with case-mix adjustment, valid conclusions about program performance may be difficult to derive from large-scale outcome-monitoring systems.
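The follow-up problem can be illustrated with a small, purely hypothetical simulation: two programs with identical true outcomes can look quite different when one program mostly reaches its better-functioning clients at follow-up. The follow-up rates, outcome scale, and selection mechanism below are invented for illustration and are not taken from the study.

```python
# Hypothetical illustration of follow-up (attrition) bias: the two programs have
# identical true outcomes, but program B mostly reaches clients who are doing well.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# "Problems" score at 12 months (higher = worse); same distribution in both programs
problems_a = rng.normal(loc=5.0, scale=2.0, size=n)
problems_b = rng.normal(loc=5.0, scale=2.0, size=n)

# Program A reaches 90% of clients regardless of how they are doing;
# program B is more likely to reach clients with fewer problems
followed_a = rng.random(n) < 0.90
followed_b = rng.random(n) < 1.0 / (1.0 + np.exp(problems_b - 5.0))

print(f"Follow-up rate, program A: {followed_a.mean():.0%}")
print(f"Follow-up rate, program B: {followed_b.mean():.0%}")
print(f"Observed mean problems, program A: {problems_a[followed_a].mean():.2f}")
print(f"Observed mean problems, program B: {problems_b[followed_b].mean():.2f}")
# Program B appears to have better outcomes even though the true outcomes are identical.
```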

Program effects may inevitably be small. When differences in program effects are small, the outcomes of many more cases must be assessed before there are sufficient data to detect performance differences reliably. However, most substance abuse treatment programs see only small numbers of cases (fewer than 100 per year) and therefore may not provide adequate data for conclusive evaluation.
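As a rough illustration of why small effects and small caseloads are a difficult combination, the sketch below runs a conventional power calculation for a simple two-group comparison of a continuous outcome. The effect sizes, 5-percent significance level, and 80-percent power target are standard textbook conventions, not values reported in the study.

```python
# Conventional power calculation for a two-sample t test, showing how many cases
# are needed to detect small vs. medium standardized effects. The effect sizes and
# 80% power target are illustrative conventions, not study values.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for effect_size in (0.2, 0.5):  # Cohen's d: "small" and "medium"
    n_per_group = power_analysis.solve_power(
        effect_size=effect_size,
        alpha=0.05,
        power=0.80,
        alternative="two-sided",
    )
    print(f"d = {effect_size}: about {n_per_group:.0f} cases per group needed")

# d = 0.2 requires roughly 390+ cases per group, i.e., several years of intake
# for a program that treats fewer than 100 youths per year.
```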

Outcome-based assessment may be impractical. The challenges described here raise serious questions about whether large-scale outcome-monitoring efforts can produce valid information about treatment program performance. A more fruitful approach to performance measurement might be to invest more effort in identifying quality-of-care indicators for adolescent substance abuse treatment programs.

Citation

RAND Style Manual
Morral, Andrew R., Daniel F. McCaffrey, Greg Ridgeway, Arnab Mukherji, and Christopher Beighley, Using Outcomes to Assess Teen Substance-Use Treatment Programs—How Feasible? RAND Corporation, RB-9269-CSAT, 2007. As of October 7, 2024: https://www.rand.org/pubs/research_briefs/RB9269.html
