Detection of Associations Between Trial Quality and Effect Sizes

Published In: Detection of Associations Between Trial Quality and Effect Sizes, AHRQ Publication No. 12EHC010-EF (Rockville, MD: Agency for Healthcare Research and Quality, Jan. 2012)

Posted in 2012

by Susanne Hempel, Jeremy N. V. Miles, Marika Booth, Zhen Wang, Breanne Johnsen, Sally C. Morton, Tanja Perry, Di Valentine, Paul G. Shekelle

OBJECTIVES: To examine associations between a set of trial quality criteria and effect sizes, and to explore factors influencing the detection of such associations in meta-epidemiological datasets.

DATA SOURCES: The analyses are based on four meta-epidemiological datasets, each consisting of a number of meta-analyses and containing between 100 and 216 controlled trials. These datasets have "known" qualities, as they were used in published research to investigate associations between quality and effect sizes. In addition, we created datasets using Monte Carlo simulation methods to examine their properties.

REVIEW METHODS: We identified treatment effect meta-analyses, included trials, and extracted treatment effects for four meta-epidemiological datasets. We assessed quality and risk-of-bias indicators with 11 Cochrane Back Review Group (CBRG) criteria. In addition, we applied the Jadad criteria, criteria proposed by Schulz (e.g., allocation concealment), and the Cochrane Risk of Bias tool. We investigated the effect of individual criteria and of quantitative summary scores on reported treatment effect sizes. We explored potential reasons for differences in associations across meta-epidemiological datasets, clinical fields, and individual meta-analyses, and we used Monte Carlo simulations to investigate factors that influence the power to detect associations between quality and effect sizes.

RESULTS: Associations between quality and effect sizes were small (e.g., the ratio of odds ratios (ROR) for unconcealed vs. concealed trials was 0.89; 95% CI: 0.73, 1.09; n.s.) but consistent across the CBRG criteria. Based on a quantitative summary score, a cut-off of six or more criteria met (out of 11) best differentiated low- and high-quality trials, with lower-quality trials reporting larger treatment effects (ROR 0.86; 95% CI: 0.70, 1.06; n.s.). Results for evidence of bias varied between datasets, clinical fields, and individual meta-analyses. The simulations showed that the power to detect quality effects is, to a large extent, determined by the degree of residual heterogeneity present in the dataset.

CONCLUSIONS: Although trial quality may explain some of the heterogeneity across trial results in meta-analyses, the amount of additional heterogeneity in effect sizes is a crucial factor in determining when associations between quality and effect sizes can be detected. Detecting quality moderator effects requires more statistically powerful analyses than are employed in most investigations.
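The simulation finding above, that residual heterogeneity largely determines the power to detect quality effects, can be illustrated with a minimal Monte Carlo sketch. This is not the report's actual simulation design: the quality-effect size (a shift of -0.15 on the log-odds-ratio scale), the heterogeneity and sampling-error values, and the simple two-sample z test are all illustrative assumptions.

```python
import numpy as np

def simulate_power(n_trials=100, bias=-0.15, tau=0.2, se=0.25,
                   n_sims=2000, seed=0):
    """Monte Carlo estimate of the power to detect a quality effect.

    Each simulated trial reports a log odds ratio equal to a quality
    bias (applied only to 'low-quality' trials) plus between-trial
    heterogeneity (sd = tau) plus within-trial sampling error (sd = se).
    Detection uses a two-sided two-sample z test at alpha = 0.05.
    All numeric defaults are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        low = rng.integers(0, 2, n_trials).astype(bool)   # quality flag
        y = (bias * low                                   # quality effect
             + rng.normal(0.0, tau, n_trials)             # heterogeneity
             + rng.normal(0.0, se, n_trials))             # sampling error
        a, b = y[low], y[~low]
        diff = a.mean() - b.mean()
        sed = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if abs(diff / sed) > 1.96:                        # two-sided test
            hits += 1
    return hits / n_sims

# Power falls as residual heterogeneity grows, holding the bias fixed:
for tau in (0.0, 0.2, 0.5):
    print(f"tau={tau:.1f}  power~{simulate_power(tau=tau):.2f}")
```

With tau = 0 the same quality effect is detected in most simulated datasets, while at tau = 0.5 power collapses, mirroring the report's conclusion that additional heterogeneity, not the size of the quality effect alone, governs detectability.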

This report is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.