Examining Design and Statistical Power for Planning Cluster Randomized Trials Aimed at Improving Student Science Achievement and Science Teacher Outcomes
Published in: AERA Open, Volume 6, Issue 3, pages 1–12 (July 2020). doi: 10.1177/2332858420939526
Posted on RAND.org on November 25, 2020
With the increasing demand for evidence-based research on teacher effectiveness and student achievement, more impact studies are being conducted to examine the effectiveness of professional development (PD) interventions. Cluster randomized trials (CRTs) are often carried out to assess PD interventions that aim to improve both teacher and student outcomes. Because student and teacher outcomes are associated with different design parameters (i.e., intraclass correlations and R² values) and different benchmark effect sizes, two power analyses are necessary when planning CRTs intended to detect both teacher and student effects in one study. These two power analyses are often conducted separately, without considering how design choices made to power the study for student effects may affect its power to detect teacher effects, and vice versa. In this study, we consider strategies to maximize the efficiency of the study design when both student and teacher effects are of primary interest.
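To illustrate why the two power analyses can pull the design in different directions, the sketch below computes the minimum detectable effect size (MDES) for a standard two-level CRT using the widely used large-sample formula (as implemented, for example, in power-analysis tools such as PowerUp!). This is an illustrative sketch, not the authors' method: the function name and all parameter values below (numbers of clusters, cluster sizes, intraclass correlations) are assumptions chosen for demonstration, not figures from the article.

```python
from statistics import NormalDist

def crt_mdes(J, n, icc, r2_between=0.0, r2_within=0.0,
             P=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size for a two-level CRT with
    treatment assigned at the cluster level (normal approximation).

    J          -- number of clusters randomized
    n          -- units (students or teachers) per cluster
    icc        -- intraclass correlation for the outcome
    r2_between -- variance explained by covariates at the cluster level
    r2_within  -- variance explained by covariates at the unit level
    P          -- proportion of clusters assigned to treatment
    """
    z = NormalDist()
    # Multiplier ~2.80 for alpha = .05 (two-tailed), power = .80
    m = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    var = (icc * (1 - r2_between) / (P * (1 - P) * J)
           + (1 - icc) * (1 - r2_within) / (P * (1 - P) * J * n))
    return m * var ** 0.5

# Hypothetical example: the same 40 randomized schools, but many
# students per school versus few teachers per school, with different
# (assumed) intraclass correlations for each outcome.
student_mdes = crt_mdes(J=40, n=60, icc=0.20)  # ~0.41
teacher_mdes = crt_mdes(J=40, n=4, icc=0.15)   # ~0.53
```

Under these assumed values, the teacher-outcome MDES is noticeably larger than the student-outcome MDES even though the cluster-level ICC is smaller, because far fewer teachers than students are nested in each cluster. This is one concrete reason the two power analyses cannot simply be run in isolation: a design that is adequately powered for student effects may be underpowered for teacher effects.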