Controlling for Student Heterogeneity in Longitudinal Achievement Models

by J. R. Lockwood, Daniel F. McCaffrey


Research and policies concerning primary and secondary school education in the United States are increasingly focused on student achievement test scores, in part because of the rapidly growing availability of data tracking student scores over time. These longitudinal data are currently being used to measure school and teacher performance, as well as to study the impacts of teacher qualifications, teaching practices, school choice, school reform, charter schools, and other educational interventions. Longitudinal data are highly valued because they offer analysts possible controls for unmeasured student heterogeneity in test scores that might otherwise bias results.

Two approaches are widely used to control for this student heterogeneity: fixed effects models, which condition on the means of the individual students and use ordinary least squares to estimate model parameters; and random effects or mixed models, which treat student heterogeneity as part of the model error term and use generalized least squares for estimation. The usual criticism of the mixed model approach is that correlation between the unobserved student effects and other educational variables in the model can lead to biased and inconsistent parameter estimates, whereas under the same assumptions, the fixed effects approach does not suffer these shortcomings.

This paper examines this criticism in the context of longitudinal student achievement data, where the complexities of standardized test scores may create conditions leading to bias in fixed effects estimators. The authors show that under a general model for student heterogeneity in observed test scores, the mixed model approach can have a certain "bias compression" property that can effectively safeguard against bias due to uncontrolled student heterogeneity, even in cases in which fixed effects models may lead to inconsistent estimates. They present several examples with simulated data to investigate the practical implications of their findings for educational research and practice.
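The contrast between the two approaches can be illustrated with a small simulation. The sketch below (not the authors' actual model; all parameter values are hypothetical) generates test scores with an unobserved student effect that is correlated with an observed predictor, then estimates the predictor's coefficient three ways: pooled OLS, the fixed effects (within-student demeaning) estimator, and a textbook random effects estimator implemented as quasi-demeaning with a shrinkage factor theta. It reproduces the standard criticism the paper examines: pooled OLS and the random effects estimator absorb some of the correlated heterogeneity, while demeaning removes it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_students, n_years = 500, 4
true_effect = 1.0  # hypothetical true coefficient on the predictor

# Unobserved, time-invariant student heterogeneity (e.g., ability)
ability = rng.normal(0.0, 1.0, n_students)

# Predictor correlated with ability -> pooled OLS will be biased upward
x = ability[:, None] + rng.normal(0.0, 1.0, (n_students, n_years))
noise = rng.normal(0.0, 1.0, (n_students, n_years))
y = true_effect * x + ability[:, None] + noise

def ols_slope(x, y):
    """Simple-regression OLS slope on flattened panels."""
    x, y = x.ravel(), y.ravel()
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# 1) Pooled OLS: ignores student heterogeneity entirely
b_pooled = ols_slope(x, y)

# 2) Fixed effects: subtract each student's mean, then OLS.
#    Demeaning wipes out the time-invariant ability term.
x_fe = x - x.mean(axis=1, keepdims=True)
y_fe = y - y.mean(axis=1, keepdims=True)
b_fe = ols_slope(x_fe, y_fe)

# 3) Random effects via quasi-demeaning: subtract only a fraction
#    theta of the student mean, where theta depends on the variance
#    components (treated as known here for simplicity).
sigma_e2, sigma_u2 = 1.0, 1.0
theta = 1.0 - np.sqrt(sigma_e2 / (sigma_e2 + n_years * sigma_u2))
x_re = x - theta * x.mean(axis=1, keepdims=True)
y_re = y - theta * y.mean(axis=1, keepdims=True)
b_re = ols_slope(x_re, y_re)

print(f"pooled OLS:     {b_pooled:.3f}")
print(f"fixed effects:  {b_fe:.3f}")
print(f"random effects: {b_re:.3f}")
```

Because theta < 1, the quasi-demeaned data retain some between-student variation, so the random effects estimate lands between the pooled OLS and fixed effects estimates when the student effects are correlated with the predictor; the paper's "bias compression" argument concerns conditions under which this residual bias is nonetheless small.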

The research described in this report was conducted within RAND Education.

This report is part of the RAND Corporation Working paper series. RAND working papers are intended to share researchers' latest findings and to solicit informal peer review. They have been approved for circulation by RAND but may not have been formally edited or peer reviewed.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.