Citizen science: crowdsourcing for systematic reviews


Systematic reviews are often time-consuming and costly. In the second of a three-part learning report series for THIS Institute (The Healthcare Improvement Studies Institute), researchers outline how crowdsourcing can make the systematic review process more efficient without sacrificing quality.

Background

Systematic reviews are often time-consuming and costly, and given the exponential growth in scientific research, these challenges will only get worse. A recent report by RAND Europe for THIS Institute showed how crowdsourcing engages diverse groups of people in scientific research and described its benefits. Now, a new report seeks to answer the question: can crowdsourcing also benefit systematic reviews?

Goals

In the second of a three-part learning report series, RAND Europe looked into the use of crowdsourcing for systematic reviews. Conducted on behalf of THIS Institute, the report outlines the benefits of crowdsourcing for systematic reviews, with advice on how to ensure the quality of research is maintained.

Findings

Benefits

Crowdsourcing can make the systematic review process more efficient without sacrificing quality. Some tasks within systematic reviews can be completed more quickly and at lower cost through crowdsourcing than through the traditional researcher-led approach. In one case study, engaging people to make systematic review screening decisions through crowdsourcing platforms reduced costs by up to 88%.

Crowdsourced systematic reviews can also be of high quality. In another example, crowdsourced participants identified 95–99% of the citations that an experienced reviewer had included.

Challenges

For crowdsourced research to be successful, the right participants need to be attracted and retained. Achieving this can be very difficult and resource-intensive.

Researchers also need to consider intellectual property rights, copyright agreements and ethical issues.

Best practices

High drop-out rates can hinder crowdsourced systematic reviews. Providing participants with clear instructions, engaging with them consistently over the course of the project, setting project timelines and making sure tasks are easy to follow can all reduce drop-out rates.

Processes also need to be put in place to ensure crowdsourced systematic reviews are of a high standard. There are several approaches that can be taken:

  • Researchers can apply a threshold of decision consensus among crowd participants to manage participant error.
  • Training on the systematic review task can be provided.
  • Quality control mechanisms can be used to review crowd participants’ work, either before they begin their task or during their work.

As in other types of research, it is important to ensure that projects are conducted ethically and responsibly, particularly in relation to crowd participant exploitation.

Methodology

For this research, we conducted a rapid review of the available literature, desk research, and interviews with six academic researchers who have used crowdsourcing to conduct systematic reviews.


Read the full study