Estimating the Comparative Effectiveness of Screening Tests

Presented by Carolyn Rutter, The Group Health Research Institute

Wednesday, November 13th, 2013
Time: 12:00 PM – 1:30 PM Pacific / 3:00 PM – 4:30 PM Eastern
Host Location: Santa Monica, conference room 5312
Other Locations: Pittsburgh (room 6207b) & Washington, DC (room 7126)


Screening tests are a key component of preventive care. By their nature, they are widely used in otherwise healthy populations to detect conditions that may be rare. Often, a regimen of screening is undertaken, involving a sequence of tests over time. Ultimately, the benefit (or harm) from tests occurs only through an action that is triggered by test outcomes. These issues must be acknowledged in comparative effectiveness research (CER) for screening tests. While the operating characteristics of a test (sensitivity and specificity) remain important, they are not of primary interest. For cancer screening, the most relevant measure of effectiveness is the difference in mortality for screened versus unscreened patients. From the patient, clinical, and policy perspectives, the most important comparisons focus on differences in screening regimens: when to begin screening, how often to screen, and when to stop screening. The questions of whom to screen and how to tailor screening regimens are also of great interest. In this presentation, I discuss both data analytic and simulation modeling methods for estimating the CER of screening tests, focusing on colorectal cancer screening.
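The simulation modeling approach mentioned above can be illustrated with a minimal microsimulation sketch. The code below is purely illustrative and is not the CRC-SPIN model or any method from the talk; all parameters (incidence, fatality rates, the screening effect) are hypothetical round numbers chosen only to show how mortality differences between screened and unscreened cohorts might be estimated by simulating individuals.

```python
import random

def simulate_cohort(n, screened, seed=0):
    """Toy microsimulation of cancer mortality in a cohort of n people.

    Hypothetical parameters (for illustration only):
    - 10% of individuals develop the disease over the simulated horizon
    - undetected disease is fatal 50% of the time
    - screening detects disease early, reducing fatality to 20%
    """
    rng = random.Random(seed)
    deaths = 0
    for _ in range(n):
        has_disease = rng.random() < 0.10
        if not has_disease:
            continue
        fatality = 0.20 if screened else 0.50
        if rng.random() < fatality:
            deaths += 1
    return deaths / n  # cohort cancer mortality rate

# Same seed, so the two cohorts differ only in screening status
unscreened = simulate_cohort(100_000, screened=False, seed=1)
screened = simulate_cohort(100_000, screened=True, seed=1)
mortality_difference = unscreened - screened
```

In a real microsimulation such as CRC-SPIN, the simple incidence and fatality draws above would be replaced by a detailed natural-history model (adenoma onset, growth, progression), and the screening effect would operate through detection of preclinical disease rather than a single fatality parameter.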

About the Presenter

Carolyn Rutter is a biostatistician and Senior Investigator at the Group Health Research Institute, where she leads two teams focused on colorectal cancer screening. The first works on microsimulation modeling and is part of the Cancer Intervention and Surveillance Modeling Network (CISNET); it focuses on extension and application of the CRC-SPIN model and on broader issues related to microsimulation model calibration and validation. The second team conducts empirical studies of colorectal cancer screening and works to improve methods for estimating screening effectiveness; it is part of the NCI-funded PROSPR initiative, which aims to better understand the screening process, including the potential to personalize screening. Carolyn received her PhD in biostatistics at UCLA and completed post-doctoral training at Harvard's Department of Health Care Policy, where she acquired her continuing interests in methods for estimating diagnostic test accuracy, Bayesian methods, and meta-analytic approaches.

To Attend

Visitors to RAND's Santa Monica and Pittsburgh locations are welcome to attend but must RSVP at least one day prior to the seminar. To ensure attendance, please contact Donna Mead with your name, company or affiliation, and national citizenship (for security purposes).

Sponsored by the RAND Statistics Group