Hard Lessons from Educational Interventions
By Susan J. Bodilly, Brian P. Gill, Mark Berends, Sheila Nataraj Kirby, Jacob W. Dembosky, and Jonathan P. Caulkins
Susan Bodilly, Mark Berends, Sheila Nataraj Kirby, and others from RAND monitored the progress of New American Schools nationwide. Brian Gill, Jacob Dembosky, and Jonathan Caulkins evaluated the Early Childhood Initiative of Allegheny County, Penn.
Innovative attempts to improve education have flourished over the past decade. Unfortunately, many of the attempts have failed to produce all the desired results. Nevertheless, our assessments of two of the attempts have yielded important, if contrasting, lessons for policymakers.
In one attempt, chief executives from some of America's most successful businesses launched a campaign to redesign public schools nationwide. In the other attempt, community groups working with the United Way launched a campaign to improve early care and education for low-income children in and around Pittsburgh, Penn.
Both campaigns met with a mix of success and failure. In both cases, flawed "theories of action" led to some wasted effort. In general, the business-driven campaign had a poor community plan, whereas the community-driven campaign had a poor business plan. Mistaken assumptions hobbled both efforts from the outset, but both of the experiences are enormously instructive for future large-scale educational reform initiatives.
New American Schools
About a decade ago, a private nonprofit corporation called New American Schools (NAS) set out to reverse the perceived lagging performance of American students and the lackluster results of school reform efforts. Funded largely by private-sector donations, NAS was formed in 1991 in conjunction with the America 2000 initiative of former President Bush.
The mission of NAS was to help schools and districts raise student achievement levels by implementing "whole-school designs." NAS founders believed that, in the past, many school reforms were "programmatic," or focused only on a particular set of individuals, a particular subject, or a particular grade level. This approach of adopting multiple and unconnected programmatic reforms allegedly resulted in a fragmented curriculum, a balkanized school organization, and low performance by students. Instead, the NAS founders believed that high-quality schools possessed a "unifying design" that integrated school practices into a coherent and mutually reinforcing set of effective approaches to teaching and learning for the entire school. Under these conditions, the staff could presumably function to the best of their abilities.
This "theory of action" was compelling. The critical assumption, of course, was that the coherent, focused, and sustained implementation of key design features—such as professional development, curriculum and instructional materials, content and performance standards, regular assessments, organization and governance, and parent and community involvement—would improve school and classroom environments and thereby raise student achievement.
|Kevin Sved and Jonathan Williams look over the work of students at the Accelerated School, a public charter school in South Central Los Angeles that uses a comprehensive reform design endorsed by New American Schools. Teachers introduce slow learners to the same material as gifted students, while school officials work closely with parents.|
During the first year of the initiative, NAS held a competition and made awards to 11 design teams that were then commissioned to work with schools. At the schools, implementation consisted of putting into practice the core features of a whole-school design as developed by the NAS-affiliated design teams.
These efforts were expected to have measurable effects on student achievement. The private-sector sponsors wanted results in the form of many schools adopting the designs and showing improved student performance within five years. After five years, NAS was supposed to go out of business. NAS planned for a development phase (1992-1993), a demonstration phase (1993-1995), and a scale-up phase (1995-1998). Since then, NAS has continued to exist, but in a very changed form and with a changed purpose.
During the different phases, the board enjoined NAS staff to delete design teams that could not deliver on the promises of their designs, that had a limited potential market, or that could not show an ability to become financially independent from NAS. NAS's board made clear that it was not interested in supporting a group of financially dependent design teams. The board insisted that the teams would soon move to a fee-for-service arrangement.
Over time, several teams were dropped from the NAS initiative. For example, NAS refused to support designs that were not transferable to schools across the country. NAS also refused to support design teams that were led by a central district office or a particular state. Only teams that were external to a local governance structure and that were serious about scale-up outside a "home" district or locality were acceptable.
During the scale-up phase (in which only seven teams remained), NAS used a district-level strategy by partnering with Cincinnati, Ohio; Dade County, Fla.; Kentucky; Maryland; Memphis, Tenn.; Philadelphia, Penn.; Pittsburgh, Penn.; San Antonio, Tex.; San Diego, Calif.; and three districts in Washington State. By 1995, about 185 schools in these jurisdictions had implemented NAS designs, while NAS design teams had spread to over 550 schools nationwide. By 1999, the teams were partnering with over 1,000 schools across the country.
Near the end of its originally planned life, NAS argued successfully for a new federal program to fund schools to adopt NAS-like designs. In 1997, the Comprehensive School Reform Demonstration Program was created to provide schools with funding to implement designs similar to those created and developed by NAS. Federal funding allowed the NAS designs to spread to over 4,000 schools by 2001.
To prove that its approach was efficacious, NAS asked RAND to assess (1) whether the whole-school designs could be developed, (2) whether they could be implemented, and (3) whether they could lead to improved student outcomes. We summarize the results of the RAND studies below.
Developing the Designs. While the designs were developing, they kept adapting. Positive adaptations included an increase in the amount of "design-based assistance" offered to schools by the NAS design teams, continued refinement of curricular units, and the development of protocols to help schools choose their designs. Other adaptations became problematic. Adapting to district and school policies led some designs to accept unaligned and incoherent mixes of standards, assessments, curricula, instruction, and professional development.
|Students at the Accelerated School work on creative movement during a music and dance class. The school emphasizes art, poetry, and yoga along with arithmetic and grammar to fully engage the mind and body. Time magazine named the school as the country's top elementary school of the 2000-2001 school year.|
Because of the latter adaptations, a major component of NAS's theory of action—a coherent, unifying design—was often missing or was constantly in the process of being revised. It cannot be emphasized enough that during the entire time of the RAND studies, designs were still in a state of development.
Implementing the Designs. During the scale-up phase, NAS and the design teams partnered with schools and districts that were characterized by a host of problems related to poverty, low achievement, and other challenges. Two years into implementation, only about half of the sample schools in our case studies were implementing the designs at a level consistent with expectations. The other half were below this level. All schools reported additional barriers to further implementation. Four years into the effort, schools on average reported only modest levels of implementation.
Many factors in the initial process of selecting a design influenced its final outcome. Clear communication and strong assistance by design teams fostered stronger teacher support, which led to higher levels of implementation. The schools that reported a well-informed choice process reported higher levels of implementation than those that reported being forced to accept a design or not understanding the nature of a design. Principals also played a crucial role in ensuring a sound selection process and in gaining the initial buy-in of teachers.
School characteristics were associated with implementation. A teacher survey indicated that strong principal leadership was the single most important indicator correlated with the schoolwide level of implementation and that such leadership was also associated with reducing the variance in implementation among teachers within a school. The degree of implementation also correlated with teacher perceptions of the students and of their readiness to learn. Implementation was also higher in elementary schools than in secondary schools and in smaller schools than in larger schools.
The district context was important as well. Implementation was higher in districts that were more supportive of NAS designs and that had stable leadership, no budget crises, a coherent program of reform, resources dedicated to the NAS effort, significant school-level autonomy, and a trusting relationship among three groups: school staff, district staff, and union staff.
Meanwhile, many other reforms took place in the schools concurrently, often overloading teachers and reducing their capacity to implement the NAS designs. A classroom study in San Antonio revealed that the adoption of multiple reforms at the district level easily overwhelmed teachers and their efforts at this particular reform. Most important, the high-stakes state testing regime, which encouraged a focus on basic skills, prompted the district to adopt curricular programs in addition to the NAS designs. In fact, the curricular programs associated with the state testing regime conflicted with the NAS designs and resulted in lower levels of implementation.
Lack of funding was the single most important reason cited by most schools in the decision to drop a design. Teachers themselves also bore significant unfunded costs in the form of their time and effort.
Improving Student Outcomes. Of the 163 NAS schools for which we had data to compare performance relative to their districts, 81 schools (50 percent) made gains relative to the district in mathematics, and 77 schools (47 percent) made gains in reading (see Figure 1). Among the jurisdictions with 10 or more NAS schools, Memphis and Kentucky appeared to improve the most in mathematics, while Cincinnati and Washington State appeared to do best in reading. Better and longer-term data are needed to make conclusive judgments about the effects of the designs on student performance.
Our detailed classroom study of San Antonio allowed us to examine the effects of variations in instructional conditions. We found that strong principal leadership, as reported by teachers, had significant positive effects on student test scores in reading and mathematics. We found that the instructional conditions promoted by reforms such as NAS—including teacher-reported collaboration, professional development, and revised instructional practices—were not related to student achievement net of other student and classroom conditions. We also found that early implementation of NAS designs in a high-poverty district within a high-stakes testing and accountability system did not result in significant effects on student achievement.
Lessons Learned. Our findings provide mixed evidence to support the NAS theory of action. One of the primary lessons is that the designs, by themselves, cannot transform schools. Beyond the designs, the schools also need significant amounts of professional development, technical assistance, and materials geared to the implementation of the designs.
We conclude that the NAS theory of action was largely underdeveloped. The causal chain of events leading to strong implementation and improved student outcomes proved to be far more complex than originally imagined by NAS. The chain of events also remained largely outside the control and influence of NAS. Exogenous variables included school and district leadership; demographic, financial, and other prevailing conditions; the simultaneous adoption of multiple reforms; and the sometimes conflicting implementation of high-stakes testing regimes.
Nonetheless, the experiment has produced the following important lessons for future large-scale educational reform efforts:
Externally developed interventions cannot "break the mold" and yet still be implemented in the existing contexts of schools and districts. The evidence suggests that schools were not by and large fertile ground for "break the mold" ideas, often because of a lack of capacity or because of local, state, or district regulations. The schools did not have a ready place for the designs. Instead, the designs had to adapt to school conditions or simply not be implemented.
External interventions need to address systemic issues that can hinder implementation. Systemic issues include the lack of teacher capacity, especially in terms of time and subject area expertise; the lack of strong principal leadership; and a district infrastructure at odds with the needs of a design.
A rush to scale up interventions that are not completely developed weakens results. Many of the problems associated with the NAS scale-up phase were the result of pushing toward full-scale adoption before the designs were fully developed and before the design teams had created the capacity to support more schools.
A key component of successful implementation is consistent, clear, and frequent communication and assistance between design teams and schools, particularly teachers. A strong, trusting relationship between a school and an external agent is a prerequisite for strong implementation of complex interventions that require significant changes in behavior. If external players expect teachers to change their behavior significantly, then the external players need to invest considerable time and effort in building relationships with teachers.
Monitoring site progress is necessary if developers are to succeed and to improve their designs over time. Unless systems for tracking progress and for understanding school-level concerns are created and used for improving the intervention, the effort cannot succeed over the long term.
The typical outcome measures used in public accountability systems provide a very limited measure of student and school performance. In the sample we studied, the high-stakes testing regime precluded the adoption of the rich and varied curricula developed by design teams—curricula that could have challenged students and motivated them toward more in-depth learning experiences. The overwhelming emphasis now given to the scores on state- or district-mandated tests does not bode well for many innovative reform efforts.
Early Childhood Initiative
The Early Childhood Initiative (ECI) aimed to deliver high-quality early care and education services to low-income children from birth through age five in Pittsburgh and the surrounding communities of Allegheny County. ECI planners hoped to prepare the children for kindergarten, to promote their long-term educational attainment, and to give them the early tools needed to grow into productive, successful members of society.
From 1996 through 2000, ECI operated under the auspices of the United Way of Allegheny County. By April 2001, the program had been scaled down, converted to a demonstration program, and placed under the management of the University of Pittsburgh.
The goals of the program had been quite ambitious. It aimed (1) to provide high-quality services, (2) to do so on a large scale (to serve 7,600 children in 80 neighborhoods within five years), (3) to do so inexpensively (at a cost of $4,000 to $5,000 per child per year), (4) to use a community-driven approach, and (5) to achieve sustainability through a commitment of state funding.
ECI succeeded in one important respect: The program generated high-quality services for hundreds of children in several communities. However, ECI failed in many other respects, despite the good intentions of everyone involved and the support of a wide array of community leaders. Four years after its launch, ECI was far short of its enrollment targets. The cost per child was much higher than anticipated. And the efforts to secure long-term state funding had failed.
There are several policy lessons. Given the goals of ECI, success required that it have a clear sense of market realities, a well-designed theory of action, an effective strategy to induce public funding, and a coherent organizational structure. Weaknesses in each of these areas undermined ECI's success in meeting its goals that pertained to scale, cost, community, and sustainability.
A Clear Sense of Market Realities. ECI's business plan made assumptions about the population to be served, the services to be delivered, and the participation of existing child-care providers. Many of the assumptions proved to be incorrect because the planners had paid insufficient attention to the demand for services, the supply of services, and the incentives to use services.
The plan failed to anticipate that parents and neighborhood agencies would gravitate toward the highest-cost service provided by ECI. The services ranged from part-day, Head Start-like enrichment and literacy programs to full-day, center-based care and education. The plan assumed that 71 percent of children would be served in low-cost, part-day programs. In fact, virtually all the children were served in full-day programs, most in new child-care centers created by ECI.
ECI underestimated the demand for the full-time programs in part because it had underestimated the proportion of eligible children whose mothers were in the workforce. In addition, a parental preference for full-day care is unsurprising if parents are given a choice of full-day or part-day care and both options are largely or entirely subsidized. Large numbers of parents—whether of low or high income, employed or not—are likely to prefer more hours of child care and education if they are offered at little or no additional cost. This is essentially the choice that ECI offered.
As a result, ECI delivered more services per child than expected and thus cost more per child than expected. In 1999, the average cost per child per year was $13,612. That sum is not dramatically different from the cost of other, high-quality services of this kind, but it is three times as high as the cost anticipated in the ECI business plan.
The plan also assumed that many existing child-care providers would be used; instead, many were left out, for several reasons. First, many existing providers operated at so low a level of quality that they were deemed incapable of providing high-quality services. Second, some community groups refused to include the existing providers. Third, some providers refused to participate, either because they considered the quality standards and monitoring process too intrusive or because they were informal, unregistered providers not wishing to become part of the formal child-care system. Therefore, ECI incurred the high costs of creating new centers.
The plan further assumed that centers that were not fully enrolled could serve children for the same cost per child as centers that were fully enrolled. This assumption might have made sense for preexisting centers, which ECI had intended to reimburse on a per-child basis. But the great majority of ECI children attended newly established child-care centers, where the operating costs were largely fixed regardless of the number of children attending. ECI centers almost never reached 100 percent of capacity. In 1999, the average enrollment was 73 percent of capacity. As a result, operating cost per child was higher than expected.
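The fixed-cost arithmetic above can be sketched in a few lines of Python. The cost and capacity figures below are hypothetical, invented purely for illustration; only the 73 percent average enrollment rate comes from the ECI data:

```python
# Illustration: how under-enrollment inflates per-child cost when a
# center's operating costs are largely fixed. FIXED_COST and CAPACITY
# are hypothetical; the 73% enrollment rate is the 1999 ECI average.

def cost_per_child(fixed_annual_cost, capacity, enrollment_rate):
    """Per-child cost when total cost is fixed and enrollment is partial."""
    enrolled = capacity * enrollment_rate
    return fixed_annual_cost / enrolled

FIXED_COST = 600_000  # hypothetical annual operating cost of one new center
CAPACITY = 60         # hypothetical licensed capacity

budgeted = cost_per_child(FIXED_COST, CAPACITY, 1.00)  # plan's assumption
actual = cost_per_child(FIXED_COST, CAPACITY, 0.73)    # observed enrollment

print(f"At full enrollment: ${budgeted:,.0f} per child")
print(f"At 73% enrollment:  ${actual:,.0f} per child")
print(f"Inflation factor:   {actual / budgeted:.2f}x")
```

Whatever the fixed cost, serving 73 percent of capacity raises the per-child cost by a factor of 1/0.73, roughly 37 percent above the fully enrolled figure the plan assumed.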
A Well-Designed Theory of Action. At the time of ECI's inception, no models existed of high-quality early childhood services delivered on a large scale through grassroots, neighborhood control. ECI therefore needed to develop its own theory of action to explain how the initiative would work. According to its theory of action, ECI would have a central administration housed within the United Way. The administration would fund, supervise, and monitor (with stringent quality standards) the lead agencies in each community (see Figure 2). Each lead agency would in turn fund and supervise the participating providers. The providers would then serve the children and their families. The central administration would also provide technical assistance to the lead agencies, which would in turn provide such assistance to the community providers.
This theory of action proved to be cumbersome and problematic. The ultimate goal of delivering good service pertained mostly to the community providers and the families, but the theory of action pertained mostly to the central administration and the neighborhood lead agencies. The theory of action put several layers of organization between the funding sources and the primary intended beneficiaries (i.e., the children and parents) and produced a number of implementation problems.
Another consequence of the multilayered theory of action was that each layer added to the administrative cost. The theory led to the imposition of substantial top-down requirements and created additional administrative structures in each neighborhood, virtually guaranteeing that the administrative costs would be high.
Figure 3 shows how each of the mistaken assumptions of the business plan contributed to the cost overruns. The first bar shows the original budget. The second bar shows that the shift to full-day care dramatically increased both the operating and the capital costs. The third bar adds the additional costs of operating new child-care centers. These two factors alone explain 65 percent of the cost overruns. The remaining 35 percent can be attributed to higher central administrative costs, neighborhood-level costs, and other capital costs.
An Effective Strategy to Induce Public Funding. ECI planned to spearhead a lobbying effort to persuade the state of Pennsylvania to fund the initiative at the end of its five-year start-up period, when the initial infusion of money from foundations and private donors would be exhausted. ECI failed in this effort, both because of the obstacles involved and because of the strategy pursued.
The obstacles were numerous. To begin with, the benefits of early childhood programs are diffuse, whereas the costs are concentrated. Even large social benefits might not be sufficient to persuade a state legislature to fund a program when many of the benefits will not accrue to the state's treasury. Most of the benefits also accrue over the long term, whereas the costs are borne immediately. The amount of funding requested by Allegheny County would inevitably raise issues of regional fairness among Pennsylvania political leaders. And voters, like policymakers, remain ambivalent about public funding of early childhood programs. Some voters and policymakers feel strongly that child care is a private responsibility of parents rather than a public responsibility of the state.
Independent of these obstacles, ECI planners failed to ensure that state policymakers had a full, substantive, and early role in the initiative's design. Even more self-defeating, the goal of ECI conflicted with the goal of Pennsylvania's system of child-care subsidies for low-income families. ECI's goal was to provide high-quality early education to low-income children—regardless of whether their parents were working. Pennsylvania's goal for child-care subsidies, in contrast, was to prod parents on public assistance into the workforce. Even if ECI's planners disagreed with Pennsylvania's goal, the planners needed to recognize that a direct conflict with the state goal would seriously undermine the likelihood of state support for ECI.
This external conflict became a serious problem for ECI as it became increasingly dependent on the state's subsidy system. The United Way tried to make ECI compatible with that system, but such efforts created major internal conflicts and undermined ECI's support in the neighborhoods.
A Coherent Organizational Structure. ECI lacked an independent board with the authority to resolve conflicts and make key decisions. A proliferation of volunteer oversight committees added to the administrative confusion and further diffused the authority.
The complex leadership structure led to several problems. Both the community planning process and the response to changing conditions were slow, because the plans and responses had to be reviewed by several layers of committees. Confusion over who had the authority to make what decisions allowed disagreements to escalate into full-blown, unresolved power struggles among managers. Communication with funding sources and business leaders broke down because managers disagreed over what they should be communicating. Neighborhood agencies and providers received mixed signals about the rules and procedures. Predictably, support for ECI declined among funding sources, business leaders, and neighborhood groups.
Even as it established a hierarchical bureaucracy, ECI aimed to permit neighborhoods to direct the local programs. But devolution of authority to the community level requires a trade-off: Neighborhood-led programs might be more robust and effective than those imposed from without, but implementation is not likely to proceed rapidly. ECI planners failed to appreciate how much time neighborhood groups would need to mobilize, to assess their needs, to find space for child-care centers, to develop proposals, and to establish programs.
ECI's business plan failed to acknowledge the extent to which quality control and community control might be in tension. ECI's insistence on its own definition of quality placed constraints on the freedom to be exercised by the communities. Some neighborhoods were disappointed when they discovered that their dreams were not always consistent with the vision of ECI.
Lessons Learned. The foregoing discussion suggests at least four lessons for future large-scale reform initiatives:
- Careful consideration of demand, supply, and the likely responses to incentives is essential to anticipating unintended consequences.
- A thoughtful and realistic theory of action promotes an initiative's goals more effectively.
- Bold visions require hardheaded plans that acknowledge the political and policy realities and that include all relevant stakeholders early in the planning process.
- An ambitious, large-scale initiative should have an independent board and a clear administrative structure that promotes strong leadership.
Whither NAS and ECI?
Across the country, the New American Schools initiative would have had greater success if its plan had been more flexible with respect to the prevailing conditions within schools and districts, more forgiving with respect to the time lines, and more attentive to the concerns of teachers.
In Pittsburgh, the Early Childhood Initiative would have had greater success if the community plan had incorporated a clear sense of market realities, a well-designed theory of action, a better strategy to induce public funding, and a stronger organizational structure.
Because of poor planning, neither reform was likely to meet the expectations set for it. But planning can be improved in the future. Both of these reforms—one targeting entire schools, the other targeting children from birth to age five—have succeeded partially despite missteps along the way. Federal and state policymakers should not abandon what could be promising vehicles for reform before giving them a fair chance to prove themselves.
Facing the Challenges of Whole-School Reform: New American Schools After a Decade, Mark Berends, Susan J. Bodilly, Sheila Nataraj Kirby, MR-1498-EDU, 2002, 266 pp., ISBN 0-8330-3133-3, $28.00.
A "Noble Bet" in Early Care and Education: Lessons from One Community's Experience, Brian P. Gill, Jacob W. Dembosky, Jonathan P. Caulkins, MR-1544-EDU, 2002, 182 pp., ISBN 0-8330-3162-7, $20.00.
A "Noble Bet" in Early Care and Education: Lessons from One Community's Experience: Executive Summary, Brian P. Gill, Jacob W. Dembosky, Jonathan P. Caulkins, MR-1544/1-EDU, 2002, 35 pp., ISBN 0-8330-3198-8, $15.00.