Impact of Changing the Statistical Methodology on Hospital and Surgeon Ranking
The Case of the New York State Cardiac Surgery Report Card
Published in: Medical Care, v. 44, no. 4, Apr. 2006, p. 311-319
Posted on RAND.org on January 01, 2006
BACKGROUND: Risk adjustment is central to the generation of health outcome report cards. It is unclear, however, whether risk adjustment should be based on standard logistic regression, fixed-effects modeling, or random-effects modeling.

OBJECTIVE: The objective of this study was to determine how robust the New York State (NYS) Coronary Artery Bypass Graft (CABG) Surgery Report Card is to changes in the underlying statistical methodology.

METHODS: Retrospective cohort study based on data from the NYS Cardiac Surgery Reporting System on all patients undergoing isolated CABG surgery in NYS who were discharged between 1997 and 1999 (51,750 patients). Using the same risk factors as in the NYS models, fixed-effects and random-effects models were fitted to the NYS data. Quality outliers were identified using 1) the ratio of observed-to-expected mortality rates (O/E ratio), with confidence intervals (CIs) calculated using both parametric (Poisson distribution) and nonparametric (bootstrapping) techniques; and 2) shrinkage estimators.

RESULTS: At the surgeon level, the standard logistic regression model, the fixed-effects model, and the fixed-effects component of the random-effects model demonstrated near-perfect agreement on the identity of quality outliers using a quality indicator based on the O/E ratio and the Poisson distribution. Shrinkage estimators identified the fewest outliers, whereas the O/E ratios with bootstrap CIs identified the greatest number of outliers. The results were similar for hospitals, except that the fixed-effects model identified more outliers than either the NYS model or the fixed-effects component of the random-effects model.

CONCLUSION: Shrinkage estimators based on random-effects models are slightly more conservative in identifying quality outliers than the traditional approach based on fixed-effects modeling and standard logistic regression.
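To make the shrinkage idea concrete, the following is a minimal empirical-Bayes sketch (not the NYS or random-effects implementation in the paper): each provider's observed mortality rate is pulled toward the overall mean, with low-volume providers shrunk more because their rates are estimated with greater sampling variance. The function name, the normal-approximation variance, and the data are illustrative assumptions.

```python
import numpy as np

def shrink_rates(obs_rates, volumes, overall_rate, tau2):
    """Empirical-Bayes shrinkage sketch (normal approximation).

    obs_rates    -- each provider's raw observed mortality rate
    volumes      -- each provider's patient count
    overall_rate -- pooled mortality rate across all providers
    tau2         -- assumed between-provider variance
    """
    obs_rates = np.asarray(obs_rates, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    # Sampling variance of each provider's rate (binomial, normal approx.)
    sigma2 = overall_rate * (1.0 - overall_rate) / volumes
    # Shrinkage weight: near 1 for high-volume providers, near 0 for low-volume
    w = tau2 / (tau2 + sigma2)
    return w * obs_rates + (1.0 - w) * overall_rate

# Hypothetical example: two surgeons with the same raw 10% rate,
# one with 10 cases and one with 1,000, pooled rate 2%
shrunk = shrink_rates([0.10, 0.10], [10, 1000], overall_rate=0.02, tau2=4e-4)
print(shrunk)  # the low-volume surgeon is pulled much closer to 2%
```

This illustrates why shrinkage estimators flag fewer outliers: an extreme rate from a small caseload is discounted as likely noise, so fewer providers end up far from the mean.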
Explicitly modeling the provider (surgeon) effect with fixed-effects and random-effects models did not significantly alter the distribution of quality outliers compared with standard logistic regression, which does not model a provider effect. Compared with the standard parametric approach, the use of a bootstrap approach to construct 95% confidence intervals around the O/E ratio resulted in more providers being identified as quality outliers.
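The two CI constructions being compared can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code: the Poisson interval treats observed deaths as Poisson with mean equal to expected deaths (an exact chi-square-based interval), while the bootstrap interval resamples patients with replacement and takes percentiles of the recomputed O/E ratios. The simulated provider data are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def oe_poisson_ci(deaths, expected_probs, alpha=0.05):
    """Exact Poisson CI for the O/E ratio (treats O ~ Poisson(E))."""
    O = int(np.sum(deaths))
    E = float(np.sum(expected_probs))
    lo = chi2.ppf(alpha / 2, 2 * O) / 2 if O > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (O + 1)) / 2
    return O / E, lo / E, hi / E

def oe_bootstrap_ci(deaths, expected_probs, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CI: resample patients with replacement."""
    deaths = np.asarray(deaths, dtype=float)
    expected_probs = np.asarray(expected_probs, dtype=float)
    n = len(deaths)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample patient indices
        ratios[b] = deaths[idx].sum() / expected_probs[idx].sum()
    point = deaths.sum() / expected_probs.sum()
    return point, np.quantile(ratios, alpha / 2), np.quantile(ratios, 1 - alpha / 2)

# Hypothetical provider: 300 patients with risk-model predicted mortality,
# observed deaths simulated at roughly double the expected risk
p = rng.uniform(0.005, 0.08, 300)
y = rng.binomial(1, np.clip(2.0 * p, 0, 1))

print("Poisson CI:  O/E=%.2f (%.2f, %.2f)" % oe_poisson_ci(y, p))
print("Bootstrap CI: O/E=%.2f (%.2f, %.2f)" % oe_bootstrap_ci(y, p))
# A provider is flagged as a quality outlier when the 95% CI excludes 1.
```

Because the two methods make different distributional assumptions, the resulting intervals differ in width, which is why the bootstrap flagged more providers as outliers in the study.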