Value-Added Assessment in Practice

Lessons from the Pennsylvania Value-Added Assessment System Pilot Project

Published Oct 14, 2007

by Daniel F. McCaffrey, Laura S. Hamilton


Download eBook for Free

Full Document

Format: PDF file
File Size: 0.8 MB

Use Adobe Acrobat Reader version 10 or higher for the best experience.

Summary Only

Format: PDF file
File Size: 0.1 MB


Format: PDF file
File Size: 0.3 MB


Purchase Print Copy

Format: Paperback, 128 pages
Price: $26.00

The No Child Left Behind Act of 2001 places a strong emphasis on the use of student achievement test scores to measure school performance, and, throughout the United States, school and district education reform efforts are increasingly focusing on the use of student achievement data to make decisions about curriculum and instruction. To encourage and facilitate data-driven decisionmaking, many states and districts have begun providing staff with information from value-added assessment (VAA) systems — collections of complex statistical techniques that use multiple years of test-score data to try to estimate the causal effects of individual schools or teachers on student learning.

The authors examined Pennsylvania’s value-added assessment system, which was rolled out in four waves, allowing comparison of a subset of school districts participating in the VAA program with matched comparison districts not in the program. The study found no significant differences in student achievement between VAA and comparison districts.

The authors surveyed school superintendents, principals, and teachers from these districts about their attitudes toward and use of test and value-added data for decisionmaking, and found that most educators at schools participating in the VAA program do not make significant use of the information it provides. McCaffrey and Hamilton conclude that the utility of VAA cannot be accurately assessed until educators become more engaged in using value-added measures.

The research described in this report was conducted within RAND Education, a division of the RAND Corporation. It was funded by the Carnegie Corporation of New York, the Ewing Marion Kauffman Foundation, the National Education Association, and the Pennsylvania State Education Association. Additional funding came from the Connecticut Education Association, Education Minnesota, and the Ohio Education Association.

This report is part of the RAND technical report series. RAND technical reports may include research findings on a specific topic that is limited in scope or intended for a narrow audience; present discussions of the methodology employed in research; provide literature reviews, survey instruments, modeling exercises, guidelines for practitioners and research professionals, and supporting documentation; or deliver preliminary findings. All RAND reports undergo rigorous peer review to ensure that they meet high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.