Although rates of children's exposure to violence have been declining, in 2011, 58 percent of children in a nationally representative sample had been exposed to violence in the past year, with 48 percent exposed to multiple types of violence (Finkelhor, Shattuck, et al., 2014; Finkelhor, Turner, et al., 2015). The immediate negative consequences of children's exposure to violence include depression, anxiety, behavior problems, and trauma symptoms, and many of these issues persist into adulthood. Many of the interventions aimed at reducing the negative consequences of children's exposure to violence have focused on a specific type of violence exposure, intervention setting, or symptom profile. Some of these have been proven effective, such as treatments for children with posttraumatic stress (Cohen, Mannarino, and Deblinger, 2006; Lieberman, Van Horn, and Ippen, 2005). Other targeted interventions, however, have a very limited evidence base, and prevention efforts are largely untested. As a result, the evidence base is still emerging for behavioral health programs that ameliorate the adverse effects that exposure to violence can have on children. Further, there have been challenges implementing promising or proven interventions in real-world settings. Overall, there is a need to build the evidence base for interventions that can both improve outcomes for children and be effectively delivered in community-based settings.
This article presents the results of experimental and quasi-experimental studies conducted in ten different communities that sought to improve outcomes for children exposed to violence (CEV). In 2000, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) launched the Safe Start Initiative to develop better programs and practices for CEV and to demonstrate that such programs can work in community settings. The first (demonstration) phase of the Safe Start Initiative, completed in 2006, involved demonstrations of promising practices in the system of care to address children's exposure to violence. For the second (implementation) phase, called Safe Start Promising Approaches (SSPA), OJJDP selected 15 sites in 2005 to implement promising interventions designed to reduce and prevent the harmful effects of children's exposure to violence. RAND served as the national evaluator in this effort and produced reports on both process (Schultz et al., 2010) and outcomes (Jaycox et al., 2011). The second phase continued in 2010 when OJJDP selected an additional ten sites and funded RAND to conduct a national evaluation on outcomes. The ten program sites varied by community size, location, age range served, and types of violence exposure, with each proposing an intervention to fit the needs of its target population.
We designed the overall evaluation to examine whether the implementation of the Safe Start programs resulted in individual-level improvements in specific outcome domains at a particular site. The evaluation's intent-to-treat analysis approach involved analyses of all those who were offered participation in Safe Start, regardless of how much of the program they actually received. Thus, we designed it to determine what types of outcomes can be expected if the intervention is used in a similar community under similar conditions.
To prepare for program implementation and evaluation, each site worked with the national evaluation team to complete a so-called Green Light process to develop specific plans for implementation and ensure readiness for evaluation. The Green Light process included a power analysis and culminated in a rigorous evaluation design at each site. Because one site tested two interventions, there were 11 separate studies across the ten sites. Seven sites conducted randomized controlled trials (RCTs), with two of these using a wait-list comparison design. The other three sites had quasi-experimental designs with comparison groups formed within the Safe Start agency or community. For all enrolled families, we collected standardized, age-appropriate measures at baseline and six and 12 months after enrollment. We selected the measures to document child, caregiver, and family outcomes in the domains of posttraumatic stress, depression, behavior problems, social–emotional competence, school behavior and attitudes, family functioning, violence exposure, and caregiver mental health. Each site identified one primary outcome and then prioritized the other outcomes as either secondary or tertiary depending on the goals of its specific intervention. We provided initial training and ongoing support for data collection. All data were submitted electronically to RAND for processing and analysis.
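The power analyses in the Green Light process can be sketched with the standard normal-approximation formula for a two-group comparison of means, inflated for expected attrition. This is an illustrative sketch only; the sites' actual targets in Table 1 reflect additional design assumptions (test choice, clustering, site-specific retention rates) that this simple formula does not capture.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a standardized mean
    difference with a two-sided, two-sample test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def enrollment_target(effect_size, retention=0.80):
    """Inflate the analyzable total sample to an enrollment target,
    assuming a given retention rate at follow-up."""
    return math.ceil(2 * n_per_group(effect_size) / retention)

# Cohen's conventional "medium" (0.5) and "small" (0.2) effect sizes
print(n_per_group(0.5), enrollment_target(0.5))
print(n_per_group(0.2), enrollment_target(0.2))
```

The contrast between the two calls illustrates why studies expecting small effects needed far larger enrollment targets than those expecting medium effects.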
We analyzed each site's data separately; each data set included descriptive analyses of the sample characteristics at baseline and each follow-up, differences between the groups at baseline and each follow-up for each outcome measure, a description of the Safe Start services that the intervention group received, differences within each group over time for each outcome measure, and intervention effects over time that compared the mean changes of the two groups. When sample sizes allowed, we examined outcomes within different levels of dosages of the intervention.
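The intervention-effect analysis comparing the mean changes of the two groups is, in its simplest form, a difference-in-differences contrast. The sketch below uses made-up scores purely for illustration; the actual models also controlled for baseline characteristics and used the standardized measures described above.

```python
from statistics import mean

# Hypothetical symptom scores (lower = better) at baseline and six months
intervention_pre  = [22, 18, 25, 20, 24]
intervention_post = [15, 14, 19, 16, 18]
comparison_pre    = [21, 19, 24, 22, 23]
comparison_post   = [17, 16, 21, 19, 20]

# Within-group change over time for each group
change_intervention = mean(intervention_post) - mean(intervention_pre)
change_comparison = mean(comparison_post) - mean(comparison_pre)

# Difference-in-differences: change attributable to the intervention
# beyond the change the comparison group experienced anyway
did = change_intervention - change_comparison
print(change_intervention, change_comparison, did)
```

In this toy example both groups improve, and the intervention effect is only the gap between the two changes, which is why improvement in a comparison group can mask an intervention's apparent effect.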
Table 1 describes the interventions, expected effect sizes, target sample sizes required for 80-percent statistical power to detect the expected effect sizes, and the actual enrollment for each of the 11 studies. For each study, actual enrollment and retention affected our ability to draw conclusions about the effectiveness of the interventions.
Table 1. Program Study Characteristics and Evaluation Designs
| Study Site | Intervention Component and Design | Primary Outcome | Expected Effect Size | Target Enrollment for 80% Power | Actual Enrollment |
| --- | --- | --- | --- | --- | --- |
| Aurora, Colo. | RCT: Strategic enhancement to an intensive dyadic therapy (Trauma-Focused Cognitive–Behavioral Therapy + Let's Connect) compared with Trauma-Focused Cognitive–Behavioral Therapy alone | Child PTSD | Small | 729 | 235 |
| Denver, Colo. | RCT: Law Enforcement Advocate + group therapy model (Strengthening Family Coping Resources) compared with usual probation services while on a waiting list | Positive involvement | Medium | 250 | 136 |
| Detroit, Mich. | RCT: Group therapy (SFP + Psychological First Aid) + case management compared with family nutrition groups and case management | Family conflict | Medium | 250 | 403 |
| El Paso, Texas | RCT: Group therapy (culturally modified version of SFP [Dando Fuerza a la Familia]) + case management compared with case management alone | Child self-control | Medium | 250 | 486 |
| Honolulu, Hawaii* | Quasi-experimental: Enhancement to an existing group therapy (Haupoa) + individualized clinical child assessment + individual and family therapy (Modular Cognitive–Behavioral Therapy) compared with usual services | Child total behavior problems | Medium | 418 | 129 |
| Kalamazoo, Mich.* | Quasi-experimental: Adaptation to an existing group therapy (Psychological First Aid) compared with usual community services | Positive involvement | Small | 1,065 | 412 |
| Philadelphia, Pa. | RCT: Individual home-based therapy (Safety, Emotions, Loss, and Future) + Early Head Start services compared with Early Head Start alone | Caregiver depression | Small | 638 | 233 |
| Queens, N.Y.* | RCT: Intensive dyadic therapy (Alternatives for Families: A Cognitive Behavioral Therapy) compared with waiting list | Child PTSD | Medium | 250 | 99 |
| Spokane, Wash., ARC | RCT: Individual therapy (ARC model) within Head Start compared with Head Start alone | Child cooperation, assertion, self-control | Small | 638 | 198 |
| Spokane, Wash., COS | RCT: Group and individual therapy (COS) within Head Start compared with Head Start alone | Child cooperation, assertion, self-control | Small | 638 | 201 |
| Worcester, Mass. | Quasi-experimental: Child assessments and service plans + group therapy (Strengthening Family Coping Resources) within homeless shelter compared with usual shelter services alone | Child social–emotional competence, assertion, self-control | Medium | 262 | 345 |
NOTE: PTSD = posttraumatic stress disorder. SFP = Strengthening Families Program. ARC = Attachment, Self-Regulation, and Competency. COS = Circle of Security.
*Because of implementation challenges, we did not include these sites in the analysis.
Across all studies, enrollment in the Safe Start intervention groups totaled approximately 1,500 families, with an additional 1,250 families enrolled in the comparison groups. Success in meeting enrollment targets was mixed. Four of the seven studies that expected their interventions to have medium intervention effects enrolled more families than the target needed for 80-percent power, while all of the studies that expected small intervention effects enrolled far fewer families than needed for 80-percent power to detect the expected small effect. Because three studies had implementation problems that caused them to discontinue their participation in the national evaluation early, the rest of the analyses focused on the eight studies that finished.
We factored attrition from the study into the power calculations shown in Table 1, with most sites assuming and targeting 80-percent retention rates. In the end, two possible factors affected actual sample sizes used in the final outcome models: not reaching the target enrollment and not reaching the target retention rate. In fact, four of the studies reached or came very close to the target 80-percent retention rate at six months, but not all of these had reached their enrollment targets.
Summary of Power Analysis and Design Issues
Based on final sample sizes and each study's ability to finish as planned, the studies fall into three groups.
- The first group consists of the four studies that were fully powered to detect their originally expected effect sizes (Aurora for intervention retention, Detroit, El Paso, and Worcester). With retained samples well over the 200 needed, Detroit and El Paso were fully powered to detect the medium intervention effect anticipated in the design phase. Worcester, which had a quasi-experimental design, retained just enough families at six months to be fully powered to detect the expected medium effect. Aurora was powered to detect medium intervention effects for retention in the intervention. We therefore focused our discussion of findings on these four studies.
- The second group consists of the five studies (Aurora for child outcomes, Denver, Philadelphia, Spokane ARC, and Spokane COS) that were underpowered for the evaluation.
- The third group consists of the three studies that could not complete their studies as planned because of implementation challenges (Honolulu, Kalamazoo, and Queens). We do not include the results for these studies in this article.
Results for the Four Powered Studies
Among the studies powered to detect medium intervention effects for child outcomes, Detroit and El Paso had within-group changes in the outcome variables, with the intervention group showing improvement in symptoms, behaviors, or violence exposure over time. However, because the comparison groups also improved, difference-in-differences models showed no significant differences between the intervention and comparison groups in most cases. One exception was El Paso's primary outcome, child self-control, which showed a marginally significant intervention effect with the intervention group improving more than the comparison group. For Worcester, the third study powered to detect a medium intervention effect on child outcomes, we noted changes on some measures but no clear pattern of improvement or worsening across the outcomes. Aurora was powered to detect medium intervention effects for its evaluation of retention in intervention (number of sessions attended), but there were no differences between groups on the number of intervention sessions that families received. For the powered sites, we also looked at service dosage to determine whether we could observe intervention effects within the higher-dosage groups but found no evidence that families who received high, medium, or low dosage of the intervention fared differently from comparable families in a matched comparison group.
There are several possible explanations for the lack of evidence of intervention effects in the adequately powered studies. First, the robust nature of the case management or family support groups that comparison group families in Detroit and El Paso received might have made it difficult to observe an intervention effect because the comparison groups also improved. Second, the overall dosage of the services for intervention group families might not have been enough to produce the expected outcomes. In addition, we did not collect information on fidelity of the intervention delivered. Program services as they were delivered might have had small effects on outcomes that were not observable with the sample size in these studies or with the amount of time the studies lasted. That is, small effects could grow larger over time, and we could not capture that here. Also, the programs might have improved the lives of children and families in ways that we did not measure (or measured inadequately) in this study. Finally, participants in these programs varied quite a bit in terms of their baseline levels of severity—at some sites, participants all experienced problems related to trauma, but, at other sites, the participants were relatively healthy at baseline, making it difficult to demonstrate changes in outcomes over time.
Results for the Five Underpowered Studies
As expected, we found no evidence of intervention effects among the underpowered studies either. Aurora, with its strategic enhancement of a proven intervention, saw large changes in child and family outcomes in the expected direction within the intervention group. However, because of the intensity of the services that both groups received, the intervention group did not improve more than the comparison group. The interventions in Philadelphia and Spokane operated within existing programs for families such that both the intervention and comparison groups received a robust array of usual services, and we observed modest changes in both groups. Further, because of modest funding or operating within closed systems with limited capacity, not all of the sites could plan a study of the size needed. Philadelphia, Spokane, and Denver also had limited pools from which to draw study participants and so were constrained from the outset in their ability to enroll enough families. In addition, for some of these studies, the baseline status of families was such that there was little room for improvement. Finally, service uptake was lower than expected for some of these studies, with a substantial portion of the intervention groups not receiving any of the intervention services.
Effect Size Changes for All Studies
When examining the effect size change (or the within-group change from baseline to the six-month follow-up) for the intervention group, we found that only Aurora produced large, significant changes within its intervention group on any of the outcomes examined. Aurora's strategic enhancement to a proven intervention model produced large significant effects on both measures of child PTSD symptoms and on total child behavior problems. Within its intervention group, El Paso produced six medium, significant effect size changes among the outcomes examined (child PTSD, positive involvement, caregiver depression, child self-control, family conflict, and child behavior problems). In addition to its primary outcome of child self-control, El Paso's cultural adaptation of SFP and case management positively influenced measures of child PTSD, caregiver depression, family conflict, and child total behavior problems. All the other studies produced small effect size changes on outcomes within the intervention group from baseline to the follow-ups.
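Within-group effect size changes of this kind are standardized mean differences. The article does not specify the exact formula used, so the sketch below assumes one common convention: divide the baseline-to-follow-up change by the baseline standard deviation, with purely hypothetical scores.

```python
from statistics import mean, stdev

def within_group_effect_size(baseline, followup):
    """Standardized within-group change: (follow-up mean - baseline mean)
    divided by the baseline standard deviation (one common convention)."""
    return (mean(followup) - mean(baseline)) / stdev(baseline)

# Hypothetical PTSD symptom scores (lower = better) for one group
baseline = [30, 25, 35, 28, 32, 27]
followup = [22, 20, 28, 23, 25, 21]

d = within_group_effect_size(baseline, followup)
print(round(d, 2))
```

By the usual Cohen benchmarks, an absolute value around 0.2 is small, 0.5 medium, and 0.8 or larger is large; a negative value here indicates symptom improvement because lower scores are better.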
Between-Group Effect Sizes for All Studies
We also estimated the size of the intervention effect from baseline to the six-month follow-up after controlling for baseline characteristics. Across the sites, Aurora had a small effect for its primary outcome of caregiver report of child PTSD symptoms when comparing the intervention group change with the comparison group change. For all of the other sites, the intervention effect on the primary outcome was very close to 0. Among the other outcomes examined were two other small effects: Denver had a small intervention effect for total child behavior problems that favored the comparison group, and El Paso had a small intervention effect for child self-report of PTSD symptoms that also favored the comparison group.
Conclusion and Next Steps
The OJJDP SSPA initiative sought to improve the evidence base for interventions for CEV as delivered in community settings. This important work aimed to fill a gap in knowledge about how to best improve outcomes for these vulnerable children and families. The SSPA sites were diverse in terms of the interventions being delivered, the settings in which they were offered, and the groups of children and families targeted for the interventions.
Although all the sites were able to plan and launch their programs and studies, the actual implementation of the SSPA interventions provides some important insights about delivering behavioral health and supportive services in real-world settings in the areas of program recruitment and retention, family readiness for and engagement in services, and service delivery. In particular, families did not take up the services fully, with many families receiving fewer services than planned. The implementation challenges and successes, outlined in our program descriptions in the appendixes, offer important information for future implementation of these types of programs.
The national evaluation of this initiative brought rigorous experimental and quasi-experimental studies to each of the ten sites (11 studies). This rigorous design also brought many challenges, including matching outcome measures to sites, designing studies that ensured acceptability and ethical treatment of vulnerable children and families, funding and capacity limitations that prevented fully powered studies, and research procedures that added burden for staff and participants. Given these issues, we believe that other designs should also be considered in future research, including observational and multiple baseline studies that would ease the burden on community sites and allow them to focus on intervention delivery. Despite these drawbacks, the Safe Start initiative increased local capacity for delivering behavioral health programs in community-based settings, including training of staff, screening for trauma exposure, and connection with other community partners. Overall, changes in child and family outcomes were in the expected, positive direction among those who received Safe Start services, although the changes were small for most sites and there was no evidence of a difference in change for the comparison groups, which also generally improved. In addition, families reported high levels of satisfaction with the interventions offered. Although it appears that all of the programs are helping children and families and that both groups are improving over time, we do not yet have enough evidence to indicate which programs work best.
From a public health perspective, improving outcomes for CEV should include universal, selective, and targeted prevention approaches. Targeted services would be used for the minority of children who have prolonged adjustment problems related to violence exposure. There is a growing evidence base about what works in terms of the more-intensive services for children with PTSD (Foa, Keane, et al., 2008), depression (Michael and Crowley, 2002), and substance abuse problems (Tevyaw and Monti, 2004). However, we learned in Safe Start that a strategic enhancement to improve intervention retention did not necessarily improve retention over a proven intervention and that the effects expected from evidence-based interventions are not always actualized in community settings. Although there are proven and promising approaches to intervention, more work is needed to see how these interventions can be delivered effectively in real-world settings.
Several of the sites included in this initiative utilized selective prevention services for families who had been identified as exposed to violence but who were experiencing only mild or moderate symptoms. In two of the sites with adequate power to detect medium-sized improvements, both intervention and comparison families improved over time. This finding is important because it shows that supportive social services might be helpful to families, regardless of the intensity and type of services. Future exploration of services at this level might try to pin down the necessary ingredients for these less intensive, community-based approaches aimed at relatively healthy families and children. These children and families might already be on the path to recovery, bolstered by their individual or family protective factors that have helped them be resilient in the face of adversity (Gewirtz and Edleson, 2007; O'Donnell, Schwab-Stone, and Muyeed, 2002). Possible future directions include such approaches as taking a watch-and-wait tack to support families as they adjust and recover from violence exposure, and then providing or referring to specific high-quality services as specific needs are uncovered. Development of a triage system to identify what intensity level of services is needed could also be fruitful, as would be offering a menu of services and supports to meet families' current needs and being flexible to move families between levels of care as needed.
Finally, the universal prevention part of the public health triangle is insufficiently studied and was not part of the Safe Start Initiative. Violence prevention efforts have focused on reducing violence itself (Mercy et al., 1993), but little work has been done to prepare families and communities for recovery from violence when it occurs. National movements toward developing approaches in communities and settings (such as schools) that take into account trauma and its effects (trauma-informed communities) (Chafouleas et al., 2016) are gaining momentum but have not been evaluated to see whether they do, in fact, produce a more resilient child, family, or community. Thus, this area is ripe for additional exploration, particularly when combined with the full array of services within a public health model. Clearly there is a need for continued development and research across multiple levels and settings for interventions for CEV, but the challenge remains to find the key ways in which to do this that are effective, acceptable, and feasible.