A Pilot Study Using Machine Learning and Domain Knowledge to Facilitate Comparative Effectiveness Review Updating

Published in: A Pilot Study Using Machine Learning and Domain Knowledge to Facilitate Comparative Effectiveness Review Updating / Siddhartha Dalal et al. Methods Research Report (Prepared by the Southern California Evidence-based Practice Center under Contract No. 290-2007-10062-I). AHRQ Publication No. 12-EHC069-EF. Rockville, MD: Agency for Healthcare Research and Quality, September 2012. 50 pp.

Posted on Sep 1, 2012

by Siddhartha Dalal, Paul G. Shekelle, Susanne Hempel, Sydne J. Newberry, Aneesa Motala, Kanaka Shetty

BACKGROUND: Comparative effectiveness reviews must be updated frequently to remain relevant. Classification decisions from earlier screening efforts should help reduce the burden of screening thousands of newer citations for articles relevant to efficacy/effectiveness and adverse effects (AEs).

METHODS: We collected 14,700 PubMed® citation classification decisions from a 2007 comparative effectiveness review of interventions to prevent fractures in persons with low bone density (LBD), and 1,307 PubMed citation classification decisions from a 2006 comparative effectiveness review of off-label uses of atypical antipsychotic drugs (AAP). We first extracted explanatory variables from each MEDLINE® citation related to key concepts, including the intervention, outcome, and study design. We then used these data to empirically derive statistical models (based on sparse generalized linear models with convex penalties [GLMnet] and the gradient boosting machine [GBM]) that predicted inclusion in the AAP and LBD reviews. Finally, we evaluated performance on the 11,003 PubMed citations retrieved for the updated LBD and AAP reviews.

MEASUREMENTS: Sensitivity (percentage of relevant citations correctly identified), positive predictive value (PPV; percentage of predicted relevant citations that were truly relevant), and workload reduction (percentage of screening avoided).

RESULTS: The GLMnet- and GBM-based models performed similarly, with GLMnet (results shown below) performing slightly better. The GLMnet-based model yielded sensitivities of 0.921 and 0.905 and PPVs of 0.185 and 0.102 when predicting articles relevant to the AAP and LBD efficacy/effectiveness analyses, respectively (using a threshold of p ≥ 0.02). GLMnet performed better when identifying AE-relevant articles for the AAP review (sensitivity = 0.981) than for the LBD review (0.685). When tuned to maximize sensitivity, GLMnet achieved high sensitivities (0.99 for AAP and 1.0 for LBD) while reducing projected screening by 55.4 percent (1,990/3,591 articles for AAP) and 63.2 percent (4,454/7,051 for LBD).

CONCLUSIONS: In this pilot study, we evaluated statistical classifiers that used previous classification decisions and key explanatory variables derived from MEDLINE indexing terms to predict inclusion decisions for two simulated comparative effectiveness review updates. The system achieved higher sensitivity for efficacy/effectiveness articles than for LBD AE articles. In the simulation, this prototype system reduced the workload of screening updated search results for all relevant efficacy/effectiveness and AE articles by more than 50 percent, with minimal or no loss of relevant articles. With further refinement, these document classification algorithms could help researchers keep reviews up to date.
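The screening-prioritization approach described in the abstract can be sketched in miniature. The sketch below is a hypothetical illustration, not the report's actual pipeline: it uses scikit-learn's elastic-net logistic regression as a stand-in for the sparse GLM (GLMnet) family, entirely synthetic "citations" with binary indexing-term features in place of MEDLINE data, and the abstract's low probability threshold (p ≥ 0.02) to favor sensitivity. The sensitivity, PPV, and workload-reduction computations match the MEASUREMENTS definitions.

```python
# Hypothetical sketch of a citation-screening classifier in the spirit of the
# report's GLMnet-based approach. All data here are synthetic; the feature
# names, sizes, and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "citations": 1,000 examples, 20 binary indexing-term indicators
# (stand-ins for intervention/outcome/study-design terms).
X = rng.integers(0, 2, size=(1000, 20)).astype(float)

# Relevance driven by two key terms, mimicking a low-prevalence inclusion rate.
logits = 3.0 * X[:, 0] + 2.0 * X[:, 1] - 6.0
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

train, test = slice(0, 700), slice(700, 1000)

# Elastic-net penalty approximates the sparse GLM with convex penalties.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(X[train], y[train])

# Flag for human screening only citations whose predicted probability of
# relevance clears a deliberately low threshold (abstract: p >= 0.02).
p = clf.predict_proba(X[test])[:, 1]
flagged = p >= 0.02

y_test = y[test]
tp = np.sum(flagged & (y_test == 1))   # relevant and flagged
fn = np.sum(~flagged & (y_test == 1))  # relevant but missed
fp = np.sum(flagged & (y_test == 0))   # irrelevant but flagged

sensitivity = tp / (tp + fn)            # relevant citations correctly identified
ppv = tp / (tp + fp)                    # flagged citations that are relevant
workload_reduction = np.mean(~flagged)  # fraction of screening avoided

print(f"sensitivity={sensitivity:.3f} ppv={ppv:.3f} "
      f"workload reduction={workload_reduction:.1%}")
```

As in the report, lowering the threshold trades PPV for sensitivity: a review team would screen every flagged citation by hand, so a near-miss-free (high-sensitivity) operating point is preferred even at the cost of more false positives.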

This report is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.