Disease Management Evaluation

A Comprehensive Review of Current State of the Art

by Annalijn Conklin, Ellen Nolte

RAND Health Quarterly, 2011; 1(1):7


Many countries across Europe and elsewhere have been experimenting with structured approaches to managing patients with chronic illness as a way to improve quality of care, reduce costs and, in the long run, improve population health outcomes. Despite a body of studies of disease management interventions, uncertainty about their effects remains, not least because current guidance on evaluation methods and metrics requires further development to enhance scientific rigour while remaining practical in routine operations. This article provides details from a report that reviews the academic and grey literature to help advance the task of improving the science of assessing disease management initiatives in Europe. The challenges identified are methodological, analytical and conceptual in nature, with a key issue being the establishment of the counterfactual. An array of sophisticated statistical techniques and analytical frameworks can assist in the construction of a sound comparison strategy when a randomised controlled trial is not possible. Issues to consider include: a clear framework of the mechanisms of action and expected effects of disease management; an understanding of the characteristics of disease management (scope, content, dose, context) and of the intervention and target populations (disease type, severity, case-mix); a period of observation spanning multiple years; and a logical link between performance measures and the intervention’s aims and underlying theory of behaviour change.

For more information, see RAND TR-894-EC at https://www.rand.org/pubs/technical_reports/TR894.html

Full Text

Chronic diseases account for a large share of healthcare costs, while care for people with such conditions remains suboptimal. Many countries in Europe are experimenting with new, structured approaches to better manage the care of patients with chronic illness and so improve its quality and, ultimately, patient health outcomes. While intuitively appealing, the evidence that such approaches achieve these ends remains uncertain. This is partly because of the lack of widely accepted evaluation methods to measure and report programme performance at the population level in a way that is scientifically sound and also practicable for routine operations. This report aims to help advance the methodological basis for chronic disease management evaluation by providing a comprehensive inventory of current evaluation methods and performance measures, and by highlighting the potential challenges of evaluating complex interventions such as disease management.

The challenges identified here are conceptual, methodological and analytical in nature. Conceptually, evaluation faces the diversity of interventions subsumed under the common heading of “disease management”, which are implemented in various ways, and the range of target populations for a given intervention. Clarifying the characteristics of a disease management intervention is important because it permits an understanding of the effects expected and how the intervention might produce them, and also allows for the replication of the evaluation and the implementation of the intervention in other settings and countries. Conceptual clarity on the intervention's target and reference populations is equally necessary for knowing whether an evaluation's comparator group represents the counterfactual (what would have happened in the absence of the intervention). Other conceptual challenges relate to the selection of evaluation measures, which often fail to link indicators of effect, within a coherent framework, to the aims and elements (patient-related and provider-directed) of a disease management intervention and to the evaluation's objectives.

Establishing the counterfactual is indeed a key methodological and analytical challenge for disease management evaluation. In the biomedical sciences, randomised controlled trials are generally seen as the gold-standard method for assessing the effect of a given intervention because causality is clear when individuals are randomly allocated to an intervention or a control group. In the context of multi-component, multi-actor disease management initiatives, this design is frequently not applicable because randomisation is not possible (or desirable) for reasons such as cost, ethical considerations, generalisability and the practical difficulty of ensuring an accurate experimental design. As a consequence, alternative comparison strategies need to be considered to ensure that findings of intervention effect(s) are not explained by factors other than the intervention. Yet, as alternative strategies become less like a controlled experiment, there are more threats to the validity of findings from possible sources of bias and confounding (e.g. attrition, case-mix, regression to the mean, seasonal and secular trends, selection, therapeutic specificity and so on), which can undermine the counterfactual and reduce the utility of the evaluation.
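One of these threats, regression to the mean, can be made concrete with a short simulation (a hypothetical sketch, not drawn from the report): when a programme enrols patients because a recent measurement was high, a later measurement will tend to be lower even with no intervention at all, mimicking a programme effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Each patient has a stable underlying risk level; measurements are noisy.
true_level = rng.normal(0, 1, n)
baseline = true_level + rng.normal(0, 1, n)   # measurement at enrolment
followup = true_level + rng.normal(0, 1, n)   # later measurement, NO intervention

# Programmes often enrol patients because a recent measurement was high.
enrolled = baseline > 1.5

# The enrolled group "improves" purely through regression to the mean.
drop = baseline[enrolled].mean() - followup[enrolled].mean()
print(f"apparent improvement with no intervention: {drop:.2f}")
```

An evaluation that compares enrolled patients only with their own baseline would attribute this entire drop to the programme, which is exactly why a sound comparison group is needed.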

As many design options have been suggested for disease management evaluation, choosing an appropriate study design is an important first step towards selecting a control group that still achieves the goals of randomisation. Equally, there are various analytical approaches to constructing such a comparison group: controls can be matched to intervention participants through predictive modelling or propensity scoring techniques, or they can be created statistically by developing baseline trend estimates for the outcomes of interest. Whichever approach is taken to construct a sound comparison strategy, there will be a set of limitations and analytical challenges that must be carefully considered and may be addressed at the analysis stage. And while some statistical techniques, such as regression discontinuity analysis, can be applied ex post to approximate the objectives of a randomised controlled trial, it is better to plan prospectively, before a given intervention is implemented, to obtain greater scientific returns on the evaluation effort.
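As an illustration of one such technique, the sketch below applies propensity-score matching to simulated data (the covariates, enrolment model and sample sizes are illustrative assumptions, not taken from the report): enrolment is modelled from observed case-mix variables, and each enrolled patient is matched to the non-enrolled patient with the nearest estimated enrolment probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1_000

# Simulated case-mix covariates: age and a standardised severity score.
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)

# Enrolment is confounded: sicker patients are more likely to enrol.
logit = -1.0 + 0.8 * severity
enrolled = rng.random(n) < 1 / (1 + np.exp(-logit))

# Step 1: estimate each patient's probability of enrolment (propensity score).
X = np.column_stack([age, severity])
ps = LogisticRegression(max_iter=1000).fit(X, enrolled).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching on the score, with replacement.
treated_idx = np.flatnonzero(enrolled)
control_idx = np.flatnonzero(~enrolled)
nearest = np.abs(ps[treated_idx][:, None] - ps[control_idx][None, :]).argmin(axis=1)
matched_idx = control_idx[nearest]

# Step 3: check covariate balance against the raw (unmatched) comparison.
raw_gap = severity[treated_idx].mean() - severity[control_idx].mean()
matched_gap = severity[treated_idx].mean() - severity[matched_idx].mean()
print(f"severity gap, unmatched: {raw_gap:.3f}; matched: {matched_gap:.3f}")
```

With strong self-selection on severity, the matched comparison group shows a much smaller covariate gap than the raw comparison, which is the balance the technique is designed to deliver for observed confounders; unobserved confounding remains a limitation that the prose above rightly flags.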

Other methodological and analytical challenges in disease management evaluation also require thoughtful planning, such as ensuring the statistical power of the evaluation to detect a significant effect given the small numbers, non-normal distribution of outcomes and variation in dose, case-mix and so on that are typical of disease management initiatives. Several of these challenges can be addressed by analytical strategies designed to yield useful and reliable findings of disease management effects, such as extending the measurement period from 12 to 18 months and adjusting for case-mix when calculating sample size. Ultimately, however, what is required is a clear framework of the mechanisms of action and expected effects that draws on an understanding of the characteristics of disease management (scope, content, dose, context), those of the intervention and target populations (disease type, severity, case-mix), an adequate length of observation to measure effects, and a logical link between performance measures and the intervention's aims, elements and underlying theory driving the anticipated behaviour change.
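The interplay between expected effect size, outcome variability and sample size can be sketched with a standard two-sample power calculation (a hedged illustration; the outcome, effect size and standard deviation below are hypothetical, not from the report). It also shows why variation in dose and case-mix matters: if dilution halves the realised effect, the required sample size roughly quadruples.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Patients needed per arm to detect a mean difference `delta` in an
    outcome with standard deviation `sd`, using a two-sided z-test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

# Hypothetical example: detecting a 0.5-point reduction in an outcome
# (say, HbA1c) whose standard deviation is 1.2.
full_effect = n_per_group(0.5, 1.2)      # 91 patients per arm

# If dose variation and case-mix halve the realised effect, the
# required sample size grows roughly fourfold.
diluted_effect = n_per_group(0.25, 1.2)  # 362 patients per arm
print(full_effect, diluted_effect)
```

Small programmes that cannot recruit at this scale are one reason the text recommends longer measurement periods and case-mix adjustment when planning the evaluation.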

RAND Health Quarterly is produced by the RAND Corporation. ISSN 2162-8254.