Crowdsourcing for Systematic Reviews
Published in: The Healthcare Improvement Studies Institute [Epub June 2018]
Posted on RAND.org on June 21, 2018
Systematic reviews often require thousands of hours from expert reviewers to search, screen, appraise and synthesise relevant literature. As the literature base continues to grow, researchers have begun to explore citizen science approaches to conducting systematic reviews, with the aim of generating evidence more quickly and efficiently. One such approach is crowdsourcing, which draws on a large pool of people who each make small contributions that add up to a substantial collective effort.

We explore the promising, albeit limited, evidence of the benefits of this approach, which suggests that citizen science approaches such as crowdsourcing can make the systematic review process more efficient, timely and relevant. With appropriate quality control mechanisms and participant training in place, the outputs of crowdsourced reviews may be of high enough quality to meet the threshold of a traditional 'gold standard' systematic review.

Some challenges arise when involving a large group of participants with diverse backgrounds in crowdsourced systematic reviews. Participant drop-out rates can be high. To encourage participation and retention, crowd participants should be given clear goals and well-defined tasks, as well as feedback and rewards. As in other types of research, it is important to ensure that projects are conducted ethically and responsibly, particularly in relation to the potential exploitation of crowd participants.