Proposition: "This house believes that the widespread use of comparative effectiveness reviews and cost/benefit analyses will stifle medical innovation and lead to an unacceptable rationing of health care."
The proposition under discussion has multiple moving parts and is stated in vague language that guarantees different readers will interpret it differently. To put my commentary in context, I will approach the proposition from a U.S. perspective, take the wording literally to mean reviews of existing studies rather than new head-to-head trials, and consider the implementation challenges rather than the theory of the case alone.
What effect might widespread use of comparative effectiveness reviews have on innovation? Greater emphasis on comparative effectiveness signals that new products, procedures or devices need to be superior to those already in use: they must produce better outcomes, have fewer side effects or be easier to use. This could set a higher bar for innovation: it is not enough for something to be new, it actually has to be an improvement over the choices we already have. Some interventions that would previously have entered the market would no longer do so. If they are neither better nor less expensive, however, is it fair to say that this stifles innovation? I think not.
In the process of commissioning such reviews, there is an opportunity to define more clearly the types of outcomes to be evaluated. For example, is the effectiveness in question related to increasing longevity, slowing functional decline, improving clinical markers, improving quality of life, or some other outcome domain? Existing studies of new interventions often measure different outcomes, reflecting a lack of consensus on which health outcomes matter, and that makes it nearly impossible to summarise what has been learned from prior work. Thus, while comparative effectiveness reviews may set a higher bar for innovation, they also offer the potential to establish common frames of reference both for evaluating whether something new is better than the options that exist and for comparing existing options with one another. Agreement on the preferred outcomes could stimulate innovation that better meets the preferences and needs of patients.
Patients, of course, are not homogeneous, yet comparative effectiveness reviews are frequently described as if the results were always simple: a treatment either works or does not work for everyone. In fact, most clinical research produces much more nuanced results. It is not uncommon to find that an intervention works well for one group of patients and not as well for another. Comparative effectiveness reviews, particularly those involving meta-analysis, can shed light on the optimal therapy for different subgroups of patients in a way that the original studies may not have been able to, because combining studies yields larger sample sizes. Having such information on subgroups opens up the opportunity for treatments to be better tailored to individual patients' needs.
Efforts to limit access to treatments that do not work well for some groups of patients (e.g. women whose tumours are not hormone sensitive) are characterised by some as unacceptable rationing. But since resources to pay for health services are limited even in the United States, rationing already exists in some form. Further, when public dollars are used to pay for treatments, it is reasonable to place greater limits on how those dollars can be spent than when private dollars are the source of payment. Today, factors such as wealth, education, the ability to work the system and where you live implicitly ration access to care. The information generated through comparative effectiveness reviews offers the opportunity to make resource-allocation decisions based on which patients are likely to experience the greatest improvement in health from a treatment. When public dollars are involved, that would seem a better basis for allocating resources than wealth, education or geography.
Although comparative effectiveness reviews do not typically consider costs, two other methods do: cost/benefit analysis and cost-effectiveness analysis. Cost/benefit analysis (CBA) is designed to answer whether a proposed intervention confers a net economic benefit to society. CBA requires an explicit calculation of the monetary value of a human life, which is what makes many people nervous about its implications. One could view the use of CBA as holding innovators responsible for developing technologies or processes that make a net positive contribution to society. Cost-effectiveness analysis (CEA) instead identifies the most efficient way to accomplish a given goal, for example, which of two treatments produces an additional year of healthy life at the lower cost. It does not require placing a value on a human life and from that perspective may be less threatening. Using CEA to allocate public resources efficiently would seem consistent with the obligations of stewards of public monies. Because rationing is already happening, CBA and CEA offer a more systematic basis for making some of those choices.
If comparative effectiveness reviews lead to better decisions about how to allocate limited resources, they might improve the health of the population. The more likely outcome, however, is that we will have more information but no system capable of using it well. I led a study at RAND that found that American adults receive 55% of recommended care for the leading causes of death and disability. We found that people, even those with good insurance, high incomes and advanced education, were not getting the care they needed. These were areas in which experts agreed, as reflected in practice guidelines that increasingly draw on evidence from comparative effectiveness reviews.
Information on comparative effectiveness does not by itself change the system that produces these results. Putting more and better information into that system will help somewhat, but it will not fix the flaws in the system we have today. While we can all point to specific examples of new insights from such work that have led to changes in practice, there is no systematic assessment of the circumstances under which practice does or does not change in response to such reviews. As we move forward with comparative effectiveness reviews, it will be useful to examine how to ensure that the information is routinely and rapidly integrated into practice. And that, indeed, would be an important innovation.
Elizabeth McGlynn, Associate Director of Health at the RAND Corporation, oversees strategic development, external dissemination and communications of the results of the RAND health research portfolio. She is an internationally known expert on methods for assessing and reporting on the quality and efficiency of health care delivery at the physician, medical group, hospital, health plan, regional and national level. She is co-leading RAND Health's COMPARE initiative, which has developed a comprehensive framework and methods for evaluating a wide range of health policy proposals being considered at the federal and state level as well as by the private sector. She is a member of the Institute of Medicine and serves on a variety of national advisory committees. She is the vice-chair of the board of AcademyHealth, the professional association for health services researchers. She is also the vice-chair of the board of Providence-Little Company of Mary Hospital Service Area in Southern California. She serves on the editorial boards for Health Services Research and Milbank Quarterly and is a reviewer for many leading journals.
This op-ed was part of a moderated debate on Economist.com.