Use and impact of automated technologies for evidence synthesis in public health
The review of published evidence is a vital but time-consuming aspect of public health decision making. Researchers therefore suggest several ways to support greater use of automation for evidence reviews within public health.
What is the issue?
Reviewing published evidence is a vital aspect of public health decision making, ensuring that decisions are made using up-to-date and robust data. However, the growing volume of published literature (e.g. during the COVID-19 pandemic) is making it increasingly difficult to synthesise data in a timely manner.
(Semi-)automated tools and technologies are being developed to help overcome this challenge. These types of tools can be used at various stages within the evidence review pathway, such as helping to create better search strategies, prioritising articles for screening and extracting information automatically. However, the extent to which these types of tools are used in public health, and the barriers to their use, have not been explored in detail.
How did we help?
RAND Europe was commissioned by the European Centre for Disease Prevention and Control (ECDC) to explore the following research questions:
- What types of automated technologies are available and have been used for evidence synthesis, and how have these been used?
- At what stage in the evidence synthesis pathway have these automated technologies been used (e.g. literature searching, screening, data extraction)?
- What are the experiences and lessons learned from using automated technologies in evidence synthesis?
- What is the impact of using automated technology for evidence synthesis?
- What are the needs of EU/European Economic Area (EEA) competent health authorities in conducting technology-assisted evidence syntheses and how can these gaps be filled?
To explore these questions, we conducted a systematic review of the literature and qualitative data collection (via focus groups, interviews and a survey) with public health competent authorities in EU/EEA Member States, representatives from ECDC, evidence synthesis experts and others working in the area of public health and infectious diseases.
What did we find?
Automated approaches for conducting evidence reviews are not widely used within public health. This is in part due to a number of challenges faced by public health decision makers when attempting to use these tools. For example, human input, sometimes at significant levels, can still be required even when using automated tools. This creates a particular challenge within public health, where staff time and capacity are often already very limited (such as during the pandemic).
Other challenges include the need for staff training, ‘culture shock’ and concerns over changes to normal ways of working, and difficulty applying tools to topics they have not previously been used for. There are also concerns over the transparency of the algorithms used by the tools and subsequent issues around trust in this type of technology. These challenges mean the benefits automation can provide for evidence synthesis are not fully realised, including a reduction in the time, effort and resources needed to conduct a review.
What can be done?
We identified a number of ways to support the greater use of automation for evidence reviews within public health, including:
- Offering staff working in public health training and development opportunities to improve their knowledge of how to harness automated tools (including sharing information about which tools are available to use).
- Offering staff the time and space to try new ways of working in relation to automation.
- Having leadership in place that supports and encourages the use of automation for evidence reviews (e.g. providing the time to take part in training). Clarity from leaders on how the automated technology will be adopted, and the impact it may have on staff roles, is also beneficial.
- Offering financial support for the purchase of licences, or taking greater advantage of freely available software.
- Having greater collaboration between those working in public health and experts in automation and/or evidence reviews to better share knowledge and learning.
- Developing standards for the use of automated technologies, including guidelines for appropriate and robust use of these tools to meet peer reviewer standards.
- Creating more and higher-quality datasets for the training of algorithms used in automated tools to improve their performance.