Adjusting for attrition in school-based samples: bias, precision, and cost trade-offs of three methods
Attrition in longitudinal studies can introduce nonresponse bias. Using data from a multi-wave school-based study of adolescents, the authors compare substance-use estimates across methods, validate the nonresponse corrections by assessing how well they "postdict" known full-sample baseline values, and calculate the relative efficiency of each approach against a known "gold standard." In these data, weighting for nonresponse performed very well, whereas sample-selection modeling required assumptions that were not met in this setting and produced severe bias. The high cost of fully tracking nonrespondents may be avoidable when weighting performs as well as it did here.
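The weighting idea can be illustrated with a minimal simulated sketch (not the authors' actual procedure or data): if dropout depends on a baseline score, the naive respondent mean is biased, but reweighting respondents by the inverse of their response probability recovers the full-sample baseline value. The variable names and the logistic dropout model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical baseline substance-use score for the full sample.
baseline = rng.normal(0.0, 1.0, n)

# Assumed dropout mechanism: higher scorers are less likely to respond
# at follow-up (missing at random given the baseline score).
p_respond = 1.0 / (1.0 + np.exp(-(0.5 - 1.0 * baseline)))
responded = rng.random(n) < p_respond

# Naive estimate from respondents only: biased toward low scorers.
naive = baseline[responded].mean()

# Inverse-probability weighting: each respondent stands in for
# 1 / p_respond nonrespondents like them.
weights = 1.0 / p_respond[responded]
weighted = np.average(baseline[responded], weights=weights)
```

Here the weighted mean lands close to the full-sample baseline mean, while the naive respondent mean does not, which mirrors the "postdiction" check described above; in practice the response probabilities would themselves be estimated from baseline covariates rather than known.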