Evaluation of the Alternative Provision Specialist Taskforces (APST)
Evaluation Protocol and Statistical Analysis Plan
Published in: Youth Endowment Fund website (2023)
Posted on rand.org Jun 22, 2023
The evaluation of the Alternative Provision Specialist Taskforces (APST) programme is being delivered by a consortium of three organisations: RAND Europe, the University of Westminster (UW) and FFT Education Datalab (FFT).
The Consortium will carry out an independent evaluation that will include:
- A mixed-methods process evaluation that aims to understand the delivery of APST and the experiences of those involved, and that provides ongoing feedback and lessons learnt across the 22 alternative provision (AP) schools implementing APST.
- An impact evaluation that aims to estimate the causal effect of APST on a range of pupil outcomes (see 126.96.36.199). Because randomising AP schools or AP pupils was not feasible (the intervention schools had already been selected by DfE), the impact analysis uses a quasi-experimental approach. To account for likely unobserved differences between participating and non-participating schools, the impact evaluation will take the form of a difference-in-differences study.
- A cost evaluation that describes the costs associated with delivery of APST at both the Department for Education (DfE) level and the school level.
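The logic of the difference-in-differences design described above can be sketched with a minimal example. The group means below are entirely synthetic and chosen only to illustrate the estimator; they are not drawn from the evaluation:

```python
# Illustrative difference-in-differences (DiD) estimator on synthetic data.
# Means of an outcome, before and after the intervention, in each group.
pre = {"treated": 10.0, "comparison": 12.0}
post = {"treated": 14.0, "comparison": 13.0}

# Change over time within each group
treated_change = post["treated"] - pre["treated"]          # 4.0
comparison_change = post["comparison"] - pre["comparison"]  # 1.0

# DiD nets out the common time trend shared by both groups,
# leaving the estimated effect of the intervention.
did_estimate = treated_change - comparison_change
print(did_estimate)  # prints 3.0
```

The key identifying assumption is that, absent the intervention, the treated and comparison groups would have followed parallel trends; the comparison group's change over time then serves as the counterfactual trend for the treated group.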
Within the Consortium:
- RAND Europe holds overall responsibility for the delivery of the evaluation, including project management and ensuring that all elements of the research are integrated.
- RAND Europe leads on the formative aspect of the evaluation, process evaluation, the cost evaluation, and the primary data collection to inform the impact evaluation.
- FFT and UW lead on the quasi-experimental impact evaluation, including establishing the counterfactual, linking datasets, and all outcome analyses.
For most outcomes evaluated in the impact evaluation, data will be taken from administrative sources, which by definition cover all participating and non-participating schools in England. However, primary data collection is required for social and emotional outcomes, which are measured via the Strengths and Difficulties Questionnaire (SDQ). Comparison schools therefore had to be recruited to evaluate this outcome. To do this, three matched comparison AP schools were identified for each of the 22 participating AP schools, and we attempted to recruit one of the three potential comparison schools for each participating school. This was achieved in 15 cases, with reserve matches recruited in 6 cases. This approach was considered the optimal balance between a sufficiently powered analysis (see Table 7) and fieldwork costs. Comparison schools have been asked to administer the SDQ to their students. Further details about the selection of comparison AP schools are included in Appendix C.
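The matching step described above can be sketched as a simple nearest-neighbour search. The school characteristics, variable choices and distance measure below are hypothetical, invented purely for illustration; the actual matching variables and procedure are described in Appendix C:

```python
# Hypothetical sketch of selecting three matched comparison schools per
# participating school. Features here are (pupil count, share disadvantaged);
# all names and values are invented for illustration only.
participating = {"AP_A": (120, 0.62)}
candidates = {
    "AP_X": (115, 0.60),
    "AP_Y": (200, 0.30),
    "AP_Z": (130, 0.65),
    "AP_W": (90, 0.58),
}

def distance(a, b):
    # Crude squared distance with pupil counts rescaled so neither
    # feature dominates (illustrative only, not the evaluators' metric).
    return ((a[0] - b[0]) / 100) ** 2 + (a[1] - b[1]) ** 2

for school, feats in participating.items():
    ranked = sorted(candidates, key=lambda c: distance(feats, candidates[c]))
    matches = ranked[:3]  # three matched comparison schools per participant
    print(school, matches)  # prints: AP_A ['AP_X', 'AP_Z', 'AP_W']
```

In the evaluation itself, one of the three matches was approached first, with the remaining matches held as reserves if recruitment failed.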