Surveillance and Identification of Signals for Updating Systematic Reviews

Implementation and Early Experience

Sydne J. Newberry, Paul G. Shekelle, Nadera Ahmadzai, Aneesa Motala, Alexander Tsertsvadze, Margaret A. Maglione, Mohammed T. Ansari, Susanne Hempel, Sophia Tsouros, Jennifer J. Schneider Chafen, et al.

Research posted on rand.org Jul 28, 2016.

Published in: Surveillance and Identification of Signals for Updating Systematic Reviews: Implementation and Early Experience. Methods Research Report. Newberry S.J. et al. (Prepared by the RAND Corporation, Southern California Evidence-based Practice Center under Contract No. 290-2007-10062-I, and the University of Ottawa Evidence-based Practice Center under Contract No. 290-2007-10059-I.) AHRQ Publication No. 13-EHC088-EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2013.

Background

The question of how to determine when a systematic review needs to be updated is of considerable importance. Changes in the evidence can have significant implications for clinical practice guidelines and for clinical and consumer decision-making, both of which depend on up-to-date systematic reviews as their foundation. Methods have been developed for assessing signals of the need for updating, but these methods have been applied only in studies designed to demonstrate and refine them, not as an operational component of a systematic review program.

Objectives

The Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program commissioned RAND's Southern California Evidence-based Practice Center (SCEPC) and the University of Ottawa Evidence-based Practice Center (UOEPC), with assistance from the ECRI EPC, to develop and implement a surveillance process for quickly identifying Comparative Effectiveness Reviews (CERs) in need of updating.

Approach

We established a surveillance program that implemented and refined a process to assess the need for updating CERs. The process combined methods developed by the SCEPC and the UOEPC in prior projects on identifying signals for updating: an abbreviated literature search, abstraction of the study conditions and findings for each newly included study, solicitation of expert judgments on the currency of the original conclusions, and an assessment, on a conclusion-by-conclusion basis, of whether the new findings provided a signal according to the Ottawa Method and/or the RAND Method. Lastly, an overall summary assessment classified each CER as a high, medium, or low priority for updating. If a CER was deemed a low or medium priority for updating, the process would be repeated 6 months later; if the priority for updating was deemed high, the CER would be withdrawn from subsequent 6-month assessments.
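To make the shape of this workflow concrete, the sketch below models the conclusion-by-conclusion signal assessment and the 6-month surveillance cycle. It is only an illustration under assumed rules: the class and function names (Priority, ConclusionAssessment, summarize_cer, next_step) are hypothetical, and the simple roll-up rule for turning conclusion-level signals into an overall priority is an assumption, not the report's actual qualitative summary judgment.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class Priority(Enum):
    """Overall priority for updating a CER (levels taken from the report)."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ConclusionAssessment:
    """Signal assessment for a single CER conclusion (field names are illustrative)."""
    conclusion: str
    ottawa_signal: bool  # new evidence signals a change under the Ottawa Method
    rand_signal: bool    # expert judgment signals a change under the RAND Method


def summarize_cer(assessments: List[ConclusionAssessment]) -> Priority:
    """Assumed roll-up rule: no signaled conclusions -> low, a minority -> medium,
    half or more -> high. The real summary assessment was a qualitative judgment."""
    signaled = sum(1 for a in assessments if a.ottawa_signal or a.rand_signal)
    if signaled == 0:
        return Priority.LOW
    if signaled < len(assessments) / 2:
        return Priority.MEDIUM
    return Priority.HIGH


def next_step(priority: Priority) -> str:
    """Encode the surveillance cycle: high-priority CERs leave the 6-month cycle;
    low- and medium-priority CERs are reassessed 6 months later."""
    if priority is Priority.HIGH:
        return "withdraw from subsequent 6-month assessments (flag for updating)"
    return "reassess in 6 months"


if __name__ == "__main__":
    example = [
        ConclusionAssessment("Drug A superior to placebo for outcome X",
                             ottawa_signal=False, rand_signal=False),
        ConclusionAssessment("Insufficient evidence on harms of Drug B",
                             ottawa_signal=True, rand_signal=True),
    ]
    priority = summarize_cer(example)
    print(priority.value, "->", next_step(priority))
```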

Results and Conclusions

Between June 2011 and June 2012, we established a surveillance process and completed the evaluation of 14 CERs. Of the 14 CERs, 2 were classified as high priority, 3 as medium priority, and 9 as low priority. Of the 6 CERs released prior to 2010 (that is, more than 18 months before the start of the program), 2 were judged high priority, 2 medium priority, and 2 low priority for updating. We have shown that it is both useful and feasible to conduct such surveillance, in real time, across a program that produces a large number of systematic reviews on diverse topics.

Document Details

  • Availability: Non-RAND
  • Year: 2013
  • Pages: 1
  • Document Number: EP-66573

This publication is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.