Evidence-based or evidence-informed policymaking and practice is increasingly championed by governments and decisionmakers. Making this a reality requires knowing how stakeholders, such as policymakers and practitioners, already view and use evidence; the support and engagement of these stakeholders can therefore be invaluable. Understanding how stakeholders make sense of and prioritise available evidence is an important element in improving the policies and practices that affect people's lives.
In complex fields of social policy, the type and number of relevant stakeholders can vary widely. Each group of stakeholders brings its own experiences, training, and identity, which shape how it engages with different types of evidence. Such multidisciplinary environments can make it challenging to achieve consensus on how available evidence is understood and interpreted.
A recent RAND Europe project for the Early Intervention Foundation (EIF) examined how the EIF's many and varied stakeholders understand the existing evidence base—and its implications for policy and practice—on Adverse Childhood Experiences (ACEs). ACEs encompass 10 categories of child maltreatment and family dysfunction: three kinds of abuse (physical, emotional, sexual); two kinds of neglect (physical and emotional); and five kinds of household dysfunction (mental illness, mother treated violently, divorce, incarcerated relative, and substance abuse).
While these 10 categories all interact, they tend to relate to different areas of professional expertise and responsibility. Professional fields have each tended to develop their own understanding of the evidence—what they think does and does not work in addressing ACEs. It has been argued that many misconceptions currently surround the existing evidence base, and that there is disagreement about the best next steps in research, policy, and practice to improve outcomes for children affected by ACEs. Accordingly, our research team at RAND Europe conducted a consensus-seeking exercise with EIF stakeholders using a three-survey Delphi approach.
While the Delphi research approach is a valuable tool for finding consensus among stakeholders on the direction future policy and practice should take, it poses some common challenges. The approach can be time-consuming for both researchers and participants, which can inhibit people's willingness to participate and strain project resources and timelines. It can also be difficult to identify and engage the relevant experts in a multidisciplinary and complex area.
To overcome these challenges, we made some adjustments to the 'traditional' Delphi approach. The process involved conducting a modified Delphi with three surveys building cumulatively towards identifying areas of agreement and disagreement among EIF's stakeholders. The focus was on ACE policies, practices, and research.
The project yielded three key lessons for Delphi methods:
Using Participants' Own Language to Provide a Level Playing Field of Terminology and Perspective
Different stakeholders use different words and concepts to describe their world. We wanted to avoid introducing unintended bias through our own terminology, so we opted, as far as practicable, to use the vocabulary provided to us by participants responding to the survey. Despite these efforts, a small number of participants appeared to be uncomfortable with the language we used in the second and third stages of the Delphi. However, it was also clear that groups with different 'situated knowledge' could enter interdisciplinary conversations, and we concluded that Delphis could work in such an environment if handled with sensitivity and care by those developing the survey.
Minimising Selection Criteria to Ensure Engagement with a Diverse Range of Practitioners
Working towards achieving consensus among experts requires deciding who these experts are and what exactly makes them an 'expert.' A person's expertise is commonly evaluated based on their qualifications, publication record, and reputation in the field. We avoided anything like a minimum education requirement and preferred instead to use EIF's contacts to identify those with a known engagement with (and commitment to) this topic. We used the EIF publication, with which most were familiar, as a common reference point. This approach achieved a meaningful debate and dialogue among participants but illustrated an inherent tension within the Delphi method.
Delphis reveal the reasoning of participants; understanding how reasoning is shared or differs across groups is fundamental to understanding how they engage with each other, their clients, and the evidence. It reveals how participants construct meaning in the world in which they act. By relaxing controls over the range of participants and being open to defining issues in participants' own words, we gained an understanding of their sense-making. However, as a result it becomes harder to conclude, for example, what percentage of respondents agree or disagree with particular statements.
But the strength of the Delphi method has always been pragmatic rather than epistemological. It provides an anonymous space where participants may honestly exchange views without the need to be physically co-located, and it fosters a more holistic debate that is not narrowly dominated by one set of opinions or another. Minimising the selection criteria supported a more inclusive and rounded set of insights.
Prioritising Areas of Disagreement to Keep Survey Length Manageable
Despite its advantages, participating in a Delphi can be time-consuming—for both the participants and the researchers. To meet the project schedule, as well as ensure participant engagement, we had to decide how to make the voting options in surveys 2 and 3 manageable within the time most respondents might commit to completing the survey. This was achieved by grouping the more than 200 qualitative responses to the first survey, which asked what next steps to take in ACE-related policy and practice, into nine themes.
Within these themes we tried to cover all suggestions we had received, selecting the words and phrases provided by participants that best conveyed the sentiments of their suggestions. In survey 3, rather than asking participants to vote again on all statements that had been included in survey 2, we focused on those areas where disagreement remained and asked participants to reconsider their agreement or disagreement in light of the other participants' responses. By making these choices, we were able to keep the surveys to a manageable length (between 10 and 20 minutes, depending on how much time participants wanted to spend on the open text responses).
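The carry-forward logic described above—re-voting only on statements that still lack consensus—can be sketched in a few lines of code. The 70 per cent threshold, the statement wording, and the vote counts below are purely illustrative assumptions for the sketch, not figures from the RAND Europe/EIF study.

```python
# Illustrative sketch of a modified-Delphi round filter: only statements
# that have not yet reached consensus are carried forward to the next
# survey, keeping it short. Threshold and data are hypothetical.
from collections import Counter

def needs_revote(votes, threshold=0.7):
    """Return True if no single response option reaches the consensus threshold."""
    counts = Counter(votes)
    top_share = max(counts.values()) / len(votes)
    return top_share < threshold

# Hypothetical round-2 votes on three statements ("agree"/"disagree").
round2 = {
    "S1: Screen all children for ACEs": ["agree"] * 8 + ["disagree"] * 12,
    "S2: Train practitioners in trauma-informed care": ["agree"] * 18 + ["disagree"] * 2,
    "S3: Expand routine ACE data collection": ["agree"] * 11 + ["disagree"] * 9,
}

# Only contested statements go forward to round 3.
round3_items = [s for s, v in round2.items() if needs_revote(v)]
print(round3_items)
```

In this toy example, S2 reaches consensus in round 2 and is dropped, while S1 and S3 remain contested and are re-presented in round 3 alongside a summary of other participants' responses.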
The origins of Delphi lie in seeking consensus among an expert group with a shared professional background and a shared body of knowledge underpinning their expertise. It seems that the approach might be modified to engage with stakeholders with different, overlapping expertise and, in particular, open a way to understanding the different ways groups make sense of the evidence and prioritise it in their policymaking and practice. This knowledge can help organisations such as EIF to identify relevant evidence and communicate it with a view to benefiting children who may be at risk from ACEs.
Tom Ling is senior research leader and head of Evaluation at RAND Europe. Michaela Bruckmayer is an analyst at RAND Europe.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.