Cross-Validation Performance of Mortality Prediction Models
Research | Published 2004
Mortality prediction models hold substantial promise as tools for patient management, quality assessment, and perhaps health care resource allocation planning. Yet relatively little is known about the predictive validity of these models. This study, reprinted from Statistics in Medicine, compares the cross-validation performance of seven statistical models of patient mortality: (1) ordinary least squares (OLS) regression predicting 0/1 death status six months after admission; (2) logistic regression; (3) Cox regression; (4-6) three unit-weight models derived from the logistic regression; and (7) a recursive partitioning classification technique (CART). For each model, the authors calculated the following performance statistics in both a learning sample and a test sample of patients, all drawn from a nationally representative sample of 2,558 Medicare patients with acute myocardial infarction: overall accuracy in predicting six-month mortality, sensitivity and specificity, positive and negative predictive values, and percent improvement in accuracy and error rates over model-free predictions. The authors also developed receiver operating characteristic (ROC) curves for the logistic regression, the best unit-weighted model, the single best predictor variable, and a series of CART models generated by varying the misclassification cost specifications. In the test sample, the models reduced model-free error rates at the patient level by 8-22 percent. The logistic regression models performed marginally better than the other models, and the areas under the ROC curves for the best models ranged from 0.61 to 0.63. Overall predictive accuracy for the best models may be adequate to support activities such as quality assessment, which aggregate over large groups of patients, but the extent to which these models can appropriately be applied to patient-level resource allocation planning is less clear.
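For readers who want to see how the performance statistics named above are typically computed, the following is a minimal sketch, not the authors' code or data: it fits a logistic regression and a CART-style classifier on a learning/test split of synthetic data (standing in for the Medicare AMI cohort, with hypothetical predictors) and reports accuracy, sensitivity, specificity, predictive values, and the area under the ROC curve on the test sample.

```python
# Illustrative sketch only: synthetic data and scikit-learn estimators are
# assumptions, not the original study's variables, cost specifications, or software.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic cohort: y = 1 indicates death within six months of admission.
X, y = make_classification(n_samples=2558, n_features=10,
                           weights=[0.74, 0.26], random_state=0)
X_learn, X_test, y_learn, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "cart": DecisionTreeClassifier(max_depth=4, random_state=0),
}

for name, model in models.items():
    model.fit(X_learn, y_learn)                  # develop model on learning sample
    pred = model.predict(X_test)                 # 0/1 predictions in test sample
    prob = model.predict_proba(X_test)[:, 1]     # predicted mortality risk

    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)       # true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    auc = roc_auc_score(y_test, prob)  # area under the ROC curve

    print(f"{name}: acc={accuracy:.2f} sens={sensitivity:.2f} "
          f"spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f} auc={auc:.2f}")
```

Varying the classification threshold on `prob` (or, for CART, the relative misclassification costs) traces out the ROC curve whose area the study reports.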
Originally published in: Statistics in Medicine, v. 11, 1992, pp. 475-489.
This publication is part of the RAND reprint series. The reprint series, a product of RAND from 1992 to 2011, included previously published journal articles, book chapters, and reports that were reproduced by RAND with the permission of the publisher. RAND reprints were formally reviewed in accordance with the publisher's editorial policy and were compliant with RAND's rigorous quality assurance standards for quality and objectivity. For select current RAND journal articles, see external publications.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.