Value-Added Modeling 101

Using Student Test Scores to Help Measure Teaching Effectiveness

Value-added models (VAMs) attempt to measure a teacher’s effect on his or her students’ achievement. This involves using a variety of measures to predict each student’s test score and then comparing these predicted scores to how the teacher’s students actually scored on the test.

  • VAMs attempt to estimate a teacher’s contribution to students’ progress over time.

    The goal of VAMs is to allow educators and policymakers to make apples-to-apples comparisons among teachers in terms of how much content their students learn each year, regardless of the students’ characteristics. Because the types of students teachers serve vary widely across and even within schools, VAMs focus not on how students score at a single point in time but on how much improvement they make from one testing period to the next.

  • VAMs use statistical methods to account for students’ prior characteristics.

    There is no single VAM that all researchers use, but all models account in some way for the prior test scores of a teacher’s students. One common VAM method works like this: Mr. Johnson teaches sixth-grade math. To estimate his added value, statisticians obtain the fifth-grade test scores of all his students, as well as information about their backgrounds (such as whether they were in a gifted program or a special education program). Those data are used to predict the sixth-grade math scores of his students. Mr. Johnson’s value-added estimate is the average of the differences between the actual and predicted scores of his students. If Mr. Johnson’s students consistently score higher than predicted, he is considered a high value-added teacher; conversely, if his students consistently score lower than predicted, he is considered a low value-added teacher. A simplified sketch of this calculation appears after this list.

  • Value-added estimates enable relative judgments but are not absolute indicators of effectiveness.

    Because VAMs adjust for students’ prior performance and background characteristics, one teacher’s value-added estimate can generally be compared with another’s. For this reason, VAMs are sometimes used to rank teachers who teach the same subject and grade level. However, there is no specific number that identifies “acceptable” performance. Furthermore, measurement error in the estimates means that VAMs are most useful for identifying teachers with especially high or low effectiveness scores, rather than for making fine distinctions among closely ranked teachers. Finally, VAMs can be used only for teachers of grades and subjects in which students take a standardized test at the end of the current year and also took one at the end of the previous year.

  • Value-added estimates contain information about teacher effectiveness but are imprecise.

    Most research suggests that although value-added estimates contain valuable information about teacher effectiveness, they are also imprecise. Thus, while high value-added teachers are, on average, better than low value-added teachers at improving their students’ test scores, it is less certain that any particular high value-added teacher is better at doing so than any particular low value-added teacher.
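
As a rough illustration of the procedure described under the second bullet, the sketch below shows one way the calculation could be carried out. The data, column names, and the use of a simple linear regression are assumptions made here for illustration only; actual VAMs use richer statistical models and much larger student datasets.

```python
# A minimal sketch of one common VAM approach, using hypothetical data.
# Assumed for illustration: column names, a simple linear prediction model,
# and scikit-learn's ordinary least squares regression.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical student-level data: prior (grade 5) score, background
# indicators, current (grade 6) score, and each student's teacher.
students = pd.DataFrame({
    "grade5_score": [310, 295, 330, 280, 305, 320, 290, 300],
    "gifted":       [0,   0,   1,   0,   0,   1,   0,   0],
    "special_ed":   [0,   1,   0,   1,   0,   0,   0,   1],
    "grade6_score": [325, 300, 345, 285, 318, 338, 298, 307],
    "teacher":      ["Johnson", "Johnson", "Johnson", "Lee",
                     "Lee", "Johnson", "Lee", "Lee"],
})

# Step 1: predict each student's sixth-grade score from the prior-year
# score and background characteristics, using all students together.
X = students[["grade5_score", "gifted", "special_ed"]]
y = students["grade6_score"]
predicted = LinearRegression().fit(X, y).predict(X)

# Step 2: a teacher's value-added estimate is the average of the
# differences between actual and predicted scores for that teacher's
# students. Positive values mean the students scored higher than
# predicted, on average; negative values mean they scored lower.
students["residual"] = y - predicted
value_added = students.groupby("teacher")["residual"].mean()
print(value_added)
```

Because the prediction model is fit on all students, the resulting estimates are relative rather than absolute: each teacher’s students are compared with similar students elsewhere, which is why, as noted above, VAMs support relative judgments but do not by themselves define an “acceptable” level of performance.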