Is Cross-Disciplinary Working the Way Forward to Improve Evaluation?

Commentary (The RAND Blog)


by Tom Ling

December 23, 2015

RAND Europe is well placed to support cross-disciplinary working: a wide range of disciplines is represented in the expertise and experience of our colleagues, and in the needs of the decisionmakers with whom we work. It is only natural, then, that as an evaluator I am interested in working across disciplinary boundaries.

Most recently, I was reminded of the benefits of bringing diverse disciplines together through my participation in a cross-disciplinary reading group on health inequalities at St John's College, Cambridge.

By bringing together such a diverse group, we have been able to think differently about how social, behavioural, biological, economic, and historical processes combine to produce deep and persistent health inequalities. We have also been able to consider how experts in different disciplines provide different lenses for understanding the same underlying mechanisms.

Following discussions within the group, I concluded that working within a single discipline might help identify how to make an existing system work better (for example, an economist might identify the environmental benefits of a plastic bag tax), but a cross-disciplinary conversation might help us think through how to build better systems.

Although cross-disciplinary working can refresh thinking in exciting ways, the analytical danger of this approach is that we finish up with a set of 'Russian dolls': for example, biology in the inner doll, psychology in the middle doll, and social science in the outer doll, with none of these interacting. A better method is needed to understand how the levels interact in real circumstances.

However, this is hard, not least because disciplines often put up professional barriers, adopting vocabularies and conceptual frameworks that are impenetrable to outsiders. Furthermore, there are few rewards for experts in one discipline talking to experts in another. Consequently, cross-disciplinary studies of pressing problems are rare, and research institutions struggle to support them effectively. For example, research and evaluation to support health policy often reflect a narrow disciplinary skill set (epidemiology, studies of individual behaviour, and implementation science), even when this is clearly insufficient.

To design and evaluate public policies that address health inequalities, for example, we might need to draw upon the history of urban development; studies of foetal and child development; knowledge of the capacity of the brain to adapt to its environment; and, at the same time, an understanding of the political and social drivers of resource allocation in society. Rather than evaluations that look like Russian dolls, we would have interacting elements, each as important as the others, coming together to provide an analytically rich account capable of guiding practical policy responses.

For evaluators (and for the decisionmakers we seek to serve), there is also a danger that is the opposite of the Russian doll problem: that we become overwhelmed by the sheer complexity of the interactions of biological, historical, social, and economic processes. This is why disciplines are valuable to policymakers: they provide lenses through which we can focus on a discrete set of mechanisms, isolated from their context, and really understand them. They reduce complexity to something more manageable and make scientific progress possible. At the same time, that same isolation can become a barrier to developing effective evaluations and to giving policymakers an evidence base that is sufficient and balanced.

There are no easy answers to these challenges, but I'm fairly certain that part of the answer involves creativity and imagination (and not simply the accumulation of ever more data). Making the effort to see the interdisciplinary dimensions of a problem can lift policymakers out of the zone of simplicity ('for every complex problem, there is a simple answer that is wrong') and into more interesting territory, where we build flows of evidence across disciplines that can inform a rigorous approach to learning and adaptation. We need to take the disciplines out of their glorious isolation and encourage them to interact in the real world of public policymaking.


Tom Ling heads RAND Europe's evaluation practice and is a senior research leader.

Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.