Analysis of Comparative Effectiveness

Comparative effectiveness research examines the degree to which alternative treatments for the same health problem produce equivalent or different health outcomes. The products of comparative effectiveness research can be used in a variety of ways, including to provide information to physicians and patients in choosing appropriate treatments, as well as input into insurance benefit design, coverage determination, and payment.

These are the nine performance dimensions against which we measured Comparative Effectiveness:

Spending

Under some circumstances, using comparative effectiveness research might reduce overall spending, but there is no clear evidence on the direction and magnitude of the relationship:

  • Theory suggests that, under some circumstances, use of comparative effectiveness research might decrease overall spending.
  • The effects of comparative effectiveness research on health care spending have not been studied, and they are inherently difficult to measure.
  • The findings of comparative effectiveness studies will determine whether the results could lead to decreased costs. The extent to which this will occur depends on studies being designed in a way that permits valid comparisons of alternatives and on finding clear opportunities for cost savings.

Theory suggests that, under some circumstances, use of comparative effectiveness research might decrease overall spending.

Although empirical evidence is limited, theory suggests that the use of comparative effectiveness research might decrease overall health spending. Spending reductions would require: first, the development and assembly of objective, unbiased evidence on the relative effectiveness of various treatments; second, a clear set of results that point to a clinically superior and less costly choice of intervention; and third, the use of that information to change service utilization by providers and consumers of health care. Comparative effectiveness research could result in improvements in the value of services provided without meeting all three of these conditions, but these improvements might be achieved at increased levels of spending.

At least in the near term, any reductions in spending would be offset by the up–front costs of generating, coordinating, and disseminating the research findings. The American Recovery and Reinvestment Act of 2009 (ARRA) allocated $1.1 billion in new federal funding for comparative effectiveness research, adding to the amounts currently spent by the Centers for Medicare and Medicaid Services (CMS), the Agency for Healthcare Research and Quality (AHRQ), the Department of Veterans Affairs, the National Institutes of Health (NIH), and the Office of the National Coordinator for Health Information Technology (Tunis et al., 2007). These funds could be used to support studies that generate new evidence or to synthesize and review existing evidence from various sources. Studies that generate new evidence are much more expensive than syntheses of existing evidence: head–to–head clinical trials conducted recently by NIH (2007) averaged $77.8 million, ranging from $12 million to $176 million, whereas review and synthesis of existing evidence costs approximately $50,000 to $300,000 per study, depending on the scope (AHRQ, 2007). New research can be tailored more specifically to questions about preferred approaches to treatment, whereas syntheses are limited by the questions asked and the data collected in prior research.

In the longer term, the net effect on spending depends on the pattern of results from new comparative effectiveness research and on how that evidence is used to change practice. Spending could decrease through reduced utilization of services that are conclusively shown to be ineffective, or to be more expensive than alternatives that are equally or more effective. The way in which this policy option is implemented determines the extent to which it would affect spending. Key questions include the following: (1) Will information on costs be used, or only information on clinical effectiveness? (2) How strong will the incentives in payment or coverage policies be for the use of less costly, therapeutically equivalent services?

Strategies that attempt to influence medical practice using comparative effectiveness research fall along a spectrum that ranges from dissemination to financial penalties for choosing less effective options. Approaches that use stronger incentives are more likely to have a significant impact on utilization of services, but they are also more likely to engender a backlash from stakeholders, including health care providers, who could face revenue reduction, and from patients, who could face higher cost sharing. The Federal Coordinating Council, established by the recently enacted ARRA, is expressly prohibited from setting coverage mandates or reimbursement policies (U.S. Congress, 2009).

One way to use the results of comparative effectiveness research is to disseminate the information to patients and providers with the goal of influencing medical decisionmaking. However, information dissemination alone, without the use of other incentives or mechanisms to change behavior, may not be sufficient to significantly change practice. For example, the Antihypertensive and Lipid–Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), a large randomized clinical trial, compared diuretics, Angiotensin Converting Enzyme (ACE) inhibitors, calcium channel blockers, and alpha blockers for treatment of hypertension (Pollack, 2008). Diuretics were found to be more effective than the alternatives and less expensive.

However, the results had only a small effect on prescribing patterns, partly because standards of practice changed during the course of the study as new drugs and drug combinations were introduced, and partly because of marketing by pharmaceutical companies.

"Shared decisionmaking" (SDM) is another approach to incorporating evidence in decisions on treatment alternatives. SDM is a means through which patients and their care providers become active participants in the process of communication and decisionmaking about their care (Charles, Whelan, and Gafni, 1999; Charles, Gafni, and Whelan, 1999). The information that drives shared decisionmaking could be derived from comparative effectiveness research. Research on the use of patient decision aids yields uncertain results regarding the aids' effects on cost, but generally shows improvement on other measures, such as knowledge and decision satisfaction (O'Connor et al., 1999). The Congressional Budget Office concluded from an evidence review that decision aids reduce use of aggressive surgical procedures without affecting health outcomes (CBO, 2008). The CBO also concluded that use of such aids on a broader scale could reduce health care spending. However, the CBO was unable to develop a quantitative estimate of the effects of greater use of shared decisionmaking on Medicare expenditures.

Changes to benefit design, payment, and coverage are among the applications most likely to influence the impact of comparative effectiveness research on spending. Results of studies could be incorporated into the design of health benefit packages, with less cost-effective services associated with higher cost sharing, in order to increase use of cost-effective services. Reimbursement could be changed through methods such as "reference pricing," in which a price is determined for a category of therapies (usually the least costly in a group of treatments deemed equivalently effective) and all therapies in that category are reimbursed at the reference price. Consumers who choose higher priced treatments would face higher out–of–pocket costs. Another alternative is to offer bonus payments to providers who deliver cost–effective treatments.
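The reference-pricing arithmetic described above can be sketched in a few lines. This is a hypothetical illustration: the drug names, prices, and the assumption that the consumer pays the full difference above the reference price (with no additional coinsurance) are all mine, not an actual payer's benefit rules.

```python
# Hypothetical sketch of "reference pricing": the plan reimburses every
# therapy in an equivalently effective category at the price of the
# least costly option, and the consumer pays any difference.

def out_of_pocket(price: float, reference_price: float) -> float:
    """Consumer's cost: the amount by which a therapy exceeds the reference price."""
    return max(0.0, price - reference_price)

# Assumed category of therapies deemed equivalently effective.
category = {"drug_a": 30.0, "drug_b": 55.0, "drug_c": 90.0}
reference = min(category.values())  # least costly option sets the price

for drug, price in category.items():
    print(drug, out_of_pocket(price, reference))
```

Under this sketch, choosing drug_a costs the consumer nothing extra, while drug_c carries the full 60.0 price difference, which is the incentive mechanism the text describes.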

The strongest way to affect practice patterns would be through coverage determinations. Both public and private sector insurers use clinical effectiveness information in making coverage decisions, but historically the process has been opaque, and little is known about the net impact on spending (Rowe, Cortese, and McGinnis, 2006). If the empirical evidence clearly shows that a service is ineffective, then the service is certainly a candidate for a noncoverage decision. However, noncoverage of services that have health benefits, particularly services that are more clinically effective but less cost–effective than alternatives, could be seen as limiting access to care. Another possible approach is to implement coverage eligibility through "step therapy," in which several treatment options might be covered but the least costly of equivalently effective options must be tried first.

The effects of comparative effectiveness research on health care spending have not been studied, and they are inherently difficult to measure.

There is little empirical evidence on the effect of comparative effectiveness research on spending, and it is inherently difficult to isolate its impact amid the myriad other factors that influence health care spending. Other countries use comparative effectiveness more prominently in coverage decisionmaking than does the United States, and their experiences could provide some insight into potential effects here, although major structural differences between health systems make such comparisons difficult (Wilensky, 2006).

The United Kingdom's National Institute for Health and Clinical Excellence (NICE) is often cited as a model in discussions of potential uses of comparative effectiveness research in the United States. NICE makes recommendations to the British National Health Service (NHS) on coverage of certain technologies or treatments based on cost–effectiveness analysis (Raftery, 2001). Implantable cardiac defibrillators, drug treatments for osteoporosis, and, most controversially, drugs for the treatment of multiple sclerosis and Alzheimer's disease are examples of technologies that NICE has recommended against because of their high cost relative to health benefits (Pearson and Littlejohns, 2007). However, most treatments that have been reviewed have been recommended for coverage (Devlin and Parkin, 2004). NICE approval has in fact increased costs for the NHS, because a positive recommendation creates a mandate to fund the new treatment. NICE has focused much more on reviews of new technologies than on "disinvestment" (reviewing existing therapies for evidence that they are ineffective or of low value and eliminating coverage for those services). New treatments are approved more frequently than old, ineffective treatments are removed. (By September 2006, however, the NHS had formally empowered NICE to focus on reducing health spending; see the discussion in Pearson and Rawlins, 2005.)
Overall, NICE reviews may have increased the average cost–effectiveness of treatments covered by NHS, but there is no evidence that total spending has been reduced or that the rate of increase in cost growth has been lowered. Part of this pattern may stem from the political context in which NICE was introduced. Over the period that NICE has been operational, NHS has aimed to increase spending to improve the quality and accessibility of health services, and health spending in the United Kingdom has increased rapidly (Marmor, Oberlander, and White, 2009).
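The cost–effectiveness comparison underlying a NICE-style recommendation can be sketched as an incremental cost-effectiveness ratio (ICER) checked against a threshold. All numbers below are assumptions for illustration; NICE does not apply a single fixed threshold mechanically.

```python
# Illustrative sketch of a cost-per-QALY comparison of the kind NICE
# performs. The clinical inputs and the threshold are assumed values.

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly

# New treatment vs. current standard of care (hypothetical numbers).
extra_cost = 24_000.0   # additional lifetime cost, in pounds
extra_qalys = 0.6       # additional quality-adjusted life years

threshold = 30_000.0    # assumed pounds-per-QALY threshold
ratio = icer(extra_cost, extra_qalys)   # roughly 40,000 per QALY
recommend = ratio <= threshold          # False: too costly per QALY

print(round(ratio), recommend)
```

The point of the sketch is the asymmetry discussed in the text: a treatment can pass such a test and still raise total spending, because "cost–effective" is a ratio judgment, not a claim that the treatment saves money.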

Because of the lack of empirical evidence, both the current and former directors of the CBO have indicated that estimating the potential spending reductions from creating a national–level entity to foster comparative effectiveness research is difficult at best because of the myriad assumptions such an estimate requires (Orszag, 2007a; Elmendorf, 2009). In December 2007, the CBO estimated the impact of a legislative proposal before Congress that would establish a center within AHRQ to conduct and disseminate comparative effectiveness research (Orszag, 2007b). The legislation called for an infusion of comparative effectiveness research funding, ranging from $100 million per year through 2010 to just under $400 million per year through 2019. The CBO assumed that the results of the research would generate modest practice changes and estimated that total federal spending on health care would be reduced by less than 1 percent over the ten–year period. Federal spending for comparative effectiveness research would eventually be offset by cost savings and revenue increases, but probably not until at least the end of the ten–year period (CBO, 2008). It is important to note that, in calculating these estimates, the CBO assumed that the research results were not tied to policy actions such as payment or coverage determinations. The estimates indicate that, without such policy actions, savings from comparative effectiveness research would likely be modest.

The findings of comparative effectiveness studies will determine whether the results could lead to decreased costs. The extent to which this will occur depends on studies being designed in a way that permits valid comparisons of alternatives and on finding clear opportunities for cost savings.

Types of Studies

In a 2009 AcademyHealth roundtable on comparative effectiveness research, moderator Dr. Sean Tunis of the Center for Medical Technology Policy and colleagues suggested four categories of tools or methods that are used for comparative effectiveness research. Arranged in order of decreasing cost and complexity, they are the following: (1) prospective clinical studies, which include clinical registries, head–to–head trials, pragmatic trials, and adaptive trials; (2) retrospective studies using administrative or electronic health record data; (3) decision models with or without cost information; and (4) systematic reviews (AcademyHealth, 2009).

Clinical trials (such as those used to establish the safety and efficacy of new pharmaceuticals) determine clinical efficacy by comparing treatments, sometimes against a control group or the standard of care, using a set protocol. These methods provide a strong level of evidence, but clinical trials have design features that make them difficult to use for the types of decisions that would affect health spending. Trials are usually placebo controlled and therefore do not compare new products with existing treatments; those that do provide such comparisons, while valuable, are very expensive and time consuming to conduct. Trials are performed in tightly controlled populations and are usually run for purposes other than coverage decisionmaking. Because the characteristics of trial participants often differ from those of the much broader populations for which coverage decisions are made, users of trial results must determine the extent to which the results can be generalized. In addition, clinical trials may not incorporate factors that are important to treatment decisions, such as cost, quality of life, or patient preferences (ALLHAT Officers and Coordinators, 2002). For providers seeking to apply the results to their specific patient populations, or health insurers deciding whether to cover a new medication, controlled trial results may not be useful in and of themselves.

Alternative study designs could provide information that overcomes some of these shortcomings. "Pragmatic" trials measure the effectiveness of treatments in typical medical practice settings. "Adaptive" trials allow for modification of the trial based on the results of interim analyses. Another type of prospective study uses data from clinical registries, which track the effectiveness of treatments in defined patient populations. Use of registries can be encouraged through a policy of "coverage with evidence development," which CMS applies in some coverage determinations (Tunis and Pearson, 2006). Under this type of policy, coverage of promising new treatments can be linked to a requirement that patients participate in a registry (the policy is also used to require participation in clinical trials).

Retrospective observational studies using existing data sets, such as insurance claims, are often used when clinical trials are not feasible. From 2004 to 2007, CMS used evidence from such studies 82 percent of the time in making coverage determinations (Neumann et al., 2008). These studies can add to the evidence base at lower expense than prospective clinical trials, but they are not as rigorous for two main reasons. First, since patients are not randomized to treatments, it is difficult to distinguish between the effect of the treatment and other explanations for differences in outcomes. Second, readily available data, such as insurance claims, include limited clinical information, which may affect the outcomes that can be evaluated or the ability to adjust for differences in the case mix of patients receiving one treatment versus another. In some instances, these data may be linked to other sources of information (e.g., claims data have been linked to the National Death Index to study mortality) to improve the utility of the data. Other sources of clinical information, such as medical records, are more difficult and costly to collect. In the future, it is possible that expanded use of electronic health records will facilitate the collection of clinical data.

Evidence of Cost Savings

Another factor in quantitatively estimating the relationship between comparative effectiveness research and spending is that we cannot predict how many treatments will be found to have equally or more effective, less costly alternatives. Studies may determine that one treatment is more clinically effective than another, that two treatments are equivalent, or that the evidence is mixed. If costs are also evaluated, the studies may determine that a more effective treatment is also less costly than an alternative. However, if the more effective treatment is equally or more costly, increased use of the treatment would not lead to overall spending reductions (although value may increase). A recent summary of cost–effectiveness studies found that about 20 percent of treatments and preventive measures save money compared to an alternative; 4 to 6 percent increase costs and lead to worse outcomes; and 75 percent confer a benefit and increase costs (Cohen, Neumann, and Weinstein, 2008). A major challenge for stakeholders is to determine what constitutes an unacceptably high cost, particularly given that few health care services increase costs without conferring at least some benefit.
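The taxonomy implied by the Cohen, Neumann, and Weinstein summary can be made explicit by classifying an intervention against its alternative by the signs of its incremental cost and incremental health effect. The category labels below are my own shorthand, not the authors' terms.

```python
# Minimal sketch: classify an intervention relative to an alternative
# by the sign of its incremental cost and incremental health effect.

def classify(delta_cost: float, delta_effect: float) -> str:
    if delta_effect >= 0 and delta_cost < 0:
        return "cost-saving"        # ~20% of studied interventions
    if delta_effect < 0 and delta_cost >= 0:
        return "dominated"          # ~4-6%: costs more, works worse
    if delta_effect > 0 and delta_cost > 0:
        return "benefit at a cost"  # ~75%: the common case
    return "other"

print(classify(-500.0, 0.1))   # cost-saving
print(classify(2000.0, -0.2))  # dominated
print(classify(2000.0, 0.3))   # benefit at a cost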

References

AcademyHealth, A First Look at the Volume and Cost of Comparative Effectiveness Research in the United States, Washington, D.C., June 2009. As of August 17, 2009: http://www.academyhealth.org/files/FileDownloads/AH_Monograph_09FINAL7.pdf

Agency for Healthcare Research and Quality (AHRQ), Evidence–based Practice Centers (EPCs): Request for Proposals, Rockville, Md., May 2007. As of August 12, 2009: http://www.ahrq.gov/fund/contarchive/rfp0710021.htm

ALLHAT [Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial] Officers and Coordinators, "Major Outcomes in High–Risk Hypertensive Patients Randomized to Angiotensin–Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic," Journal of the American Medical Association, Vol. 288, 2002, pp. 2981–2997.

Charles C, Gafni A, Whelan T, "Decision Making in the Physician–Patient Encounter: Revisiting the Shared Treatment Decisionmaking Model," Social Science & Medicine, Vol. 49, 1999, pp. 651–661.

Charles C, Whelan T, Gafni A, "What Do We Mean by Partnership in Making Decisions About Treatment?" BMJ, Vol. 319, 1999, pp. 780–782.

Cohen JT, Neumann PJ, Weinstein MC, "Perspective—Does Preventive Care Save Money? Health Economics and the Presidential Candidates," New England Journal of Medicine, Vol. 358, No. 7, 2008, pp. 661–663.

Congressional Budget Office (CBO), Budget Options Volume I: Health Care, Washington, D.C.: U.S. Congress, Pub. No. 3185, December 2008. As of August 12, 2009: http://www.cbo.gov/ftpdocs/99xx/doc9925/12-18-HealthOptions.pdf

Devlin N, Parkin D, "Does NICE Have a Cost–Effectiveness Threshold and What Other Factors Influence Its Decisions? A Binary Choice Analysis," Health Economics, Vol. 13, No. 5, 2004, pp. 437–452.

Elmendorf DW, "Options for Controlling the Cost and Increasing the Efficiency of Health Care," Statement Before the Subcommittee on Health Committee on Energy and Commerce, U.S. House of Representatives, Washington, D.C., March 10, 2009. As of August 12, 2009: http://www.cbo.gov/ftpdocs/100xx/doc10016/Testimony.1.1.shtml

Marmor T, Oberlander J, White J, "The Obama Administration's Options for Health Care Cost Control: Hope vs. Reality," Annals of Internal Medicine, Vol. 150, No. 7, 2009, pp. 485–489.

National Institutes of Health (NIH), Fact Sheet: Research into What Works Best, Bethesda, Md.: Department of Health and Human Services, National Institutes of Health, 2007.

Neumann PJ et al., "Medicare's National Coverage Decisions for Technologies, 1999-2007," Health Affairs, Vol. 27, No. 6, 2008, pp. 1620–1631.

O'Connor AM, Rostom A, Fiest V, Tetroe J, Entwistle V, Llewellyn–Thomas H, Holmes–Rovner M, Barry M, Jones J, "Decision Aids for Patients Facing Health Treatment or Screening Decisions: Systematic Review," BMJ, Vol. 319, 1999, pp. 731–734.

Orszag P, "Health Care and the Budget: Issues and Challenges for Reform," Statement Before the Committee on the Budget, U.S. Senate, Washington, D.C., June 21, 2007a. As of August 12, 2009: http://www.cbo.gov/ftpdocs/82xx/doc8255/06-21-HealthCareReform.pdf

Orszag P, Research on the Comparative Effectiveness of Medical Treatments: Issues and Options for an Expanded Federal Role, Washington, D.C.: U.S. Congress, Congressional Budget Office, December 2007b. As of August 12, 2009: http://www.cbo.gov/ftpdocs/88xx/doc8891/12-18-ComparativeEffectiveness.pdf/

Pearson S, Littlejohns P, "Reallocating Resources: How Should the National Institute for Health and Clinical Excellence Guide Disinvestment Efforts in the National Health Service?" Journal of Health Services Research & Policy, Vol. 12, No. 3, 2007, pp. 160–165.

Pearson SD, Rawlins MD, "Quality, Innovation, and Value for Money: NICE and the British National Health Service," Journal of the American Medical Association, Vol. 294, No. 20, 2005, pp. 2618–2622.

Pollack A, "The Minimal Impact of a Big Hypertension Study," New York Times, November 27, 2008, p. B1.

Raftery J, "NICE: Faster Access to Modern Treatments? Analysis of Guidance on Health Technologies," BMJ, Vol. 323, 2001, pp. 1300–1303.

Rowe JW, Cortese DA, McGinnis M, "The Emerging Context for Advances in Comparative Effectiveness Assessment," Health Affairs, Vol. 25, No. 6, 2006, pp. w593–w595.

Tunis S, Carino TV, Williams RD, Bach PB, "Federal Initiatives to Support Rapid Learning About New Technologies," Health Affairs, Web Exclusives, Vol. 26, No. 2, 2007, pp. w140–w149. Published online January 26, 2007. As of August 12, 2009: http://dx.doi.org/10.1377/hlthaff.26.2.w140

Tunis SR, Pearson SD, "Coverage Options for Promising Technologies: Medicare's 'Coverage with Evidence Development,'" Health Affairs, Vol. 25, No. 5, September 1, 2006, pp. 1218–1230.

U.S. Congress, 111th Cong., 1st Sess., American Recovery and Reinvestment Act of 2009, Washington, D.C., H.R. 1, January 6, 2009. As of August 12, 2009: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.pdf

Wilensky GR, "Developing a Center for Comparative Effectiveness Information," Health Affairs, Web Exclusives, Vol. 25, No. 6, 2006, pp. w572–w585. Published online November 7, 2006. As of August 11, 2009: http://content.healthaffairs.org/cgi/content/abstract/25/6/w572


Consumer Financial Risk

Theory suggests that use of comparative effectiveness information could reduce consumer financial risk, but there are no empirical studies of this relationship:

  • Theory suggests that using comparative effectiveness research might decrease consumer financial risk if consumers are more likely to use lower cost, but similarly effective, treatment options.
  • No empirical studies exist that specifically focus on this policy option.

Theory suggests that using comparative effectiveness research might decrease consumer financial risk if consumers are more likely to use lower cost, but similarly effective, treatment options.

For comparative effectiveness research to change consumer financial risk, the information would need to lead either to lower out–of–pocket expenses or to reductions in premiums. To reduce out–of–pocket expenses, consumers would need to use the results of comparative effectiveness studies to choose less expensive treatments over more expensive ones (e.g., generic drugs over brand–name ones). The extent to which this would occur depends on a number of factors, including consumers' access to and use of the information to choose more cost–effective care, their providers incorporating the information into decisions made on patients' behalf, and health plans' use of the information to design benefit packages with lower out–of–pocket expenses for cost–effective care. Health plans could directly influence consumer spending through mechanisms such as value–based insurance design (VBID), in which cost–sharing structures are altered to provide incentives to use effective care. For example, a hypertensive patient who would benefit from a particular type of antihypertensive medication might be offered a reduced copayment for that drug.
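The VBID mechanism described above amounts to an evidence-based copay schedule. The tiers, drug names, and dollar amounts below are illustrative assumptions, not any actual plan's design.

```python
# Hypothetical value-based insurance design (VBID) copay schedule:
# drugs with stronger comparative-effectiveness evidence get lower
# cost sharing. All names and amounts are assumed for illustration.

COPAYS = {"high_value": 5.0, "standard": 25.0, "low_value": 60.0}

# Assumed evidence-based tiering for antihypertensive therapy.
TIER = {
    "thiazide_diuretic": "high_value",    # strongest evidence, lowest copay
    "ace_inhibitor": "standard",
    "newer_branded_agent": "low_value",   # weakest relative evidence
}

def copay(drug: str) -> float:
    """Consumer's copayment under the assumed VBID schedule."""
    return COPAYS[TIER[drug]]

print(copay("thiazide_diuretic"))  # 5.0
```

In this sketch, the hypertensive patient in the text faces a 5.0 copay for the high-value drug versus 60.0 for the low-value alternative, which is how VBID steers utilization without prohibiting any option.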

For all of the reasons discussed in Spending, the impact of additional comparative effectiveness research on consumer financial risk is uncertain but unlikely to make a significant difference over the next decade.

No empirical studies exist that specifically focus on this policy option.

The substantial increase in investment in comparative effectiveness research from the American Recovery and Reinvestment Act (ARRA) legislation offers an opportunity to design studies to examine the impact of expanded information on consumer financial risk (and other outcomes).


Waste

Theory suggests that using comparative effectiveness information could reduce waste, but there are no empirical studies of this relationship:

  • Theory suggests that comparative effectiveness research could reduce clinical waste in the health care system if its use resulted in the adoption of less costly and more effective medical interventions.
  • No empirical studies focus specifically on this policy option.

Theory suggests that comparative effectiveness research could reduce clinical waste in the health care system if its use resulted in the adoption of less costly and more effective medical interventions.

We have identified three potential sources of waste: clinical, administrative, and operational. The availability of comparative effectiveness research, if it is primarily focused on treatment interventions, has the greatest potential to affect clinical waste. In a comprehensive overview of waste in the health care system, waste in clinical care was defined as the production of "services that provide marginal or no health benefit over less costly alternatives." This includes services that cause specific "detrimental health effects." It also includes services with "small positive health effects, compared with less costly alternatives" (Bentley et al., 2008, p. 644).

The effect of comparative effectiveness research on waste, therefore, depends on the way in which the information is used. If it serves to "weed out" procedures or treatments for which there is clearly no benefit, the effect on waste is likely to be low, since few treatments will be found to have no benefit under any circumstances. Conversely, if the information is used to make decisions that favor less costly but similarly effective alternatives, the effect on waste could be large.

Prior RAND research (McGlynn and Brook, 2001) has found that nearly one–third of surgical and medical procedures that were studied were clinically inappropriate or of questionable value in improving health outcomes. If a mechanism were used to reduce this type of inappropriate use, we might observe substantial reductions in clinical waste.

No empirical studies focus specifically on this policy option.

There is little empirical evidence to indicate how or how much comparative effectiveness information would affect waste. Studies such as those conducted by Fisher et al. (2003a and 2003b) have found evidence that variations in health spending exist that are unexplained by demographics, patient needs, or quality of care. Variations are greater for treatments with more "gray area" in the evidence base on their effective use. However, it is unclear how much of the variation between geographic regions is due to wasteful, inappropriate care, or how much of the variation could be reduced through application of new comparative effectiveness evidence. Prior research has shown no relationship between the rates of procedure use (variation) and the appropriateness of procedures (Chassin, Kosecoff, and Park, 1987; Siu, Leibowitz, and Brook, 1988; McGlynn et al., 1994; Gray et al., 1990; Pilpel et al., 1992/93; Siu et al., 1986).

References

Bentley TG, Effros RM, Palar K, Keeler EB, "Waste in the U.S. Health Care System: A Conceptual Framework," Milbank Quarterly, Vol. 86, No. 4, December 2008, pp. 629–659.

Chassin MR, Kosecoff J, Park RE, "Does Inappropriate Use Explain Geographic Variations in the Use of Health Care Services? A Study of Three Procedures," Journal of the American Medical Association, Vol. 258, 1987, pp. 2533–2537.

Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL, "The Implications of Regional Variations in Medicare Spending. Part 1: The Content, Quality, and Accessibility of Care," Annals of Internal Medicine, Vol. 138, 2003a, pp. 273–287.

Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL, "The Implications of Regional Variations in Medicare Spending. Part 2: Health Outcomes and Satisfaction with Care," Annals of Internal Medicine, Vol. 138, 2003b, pp. 288–298.

Gray D, Hampton JR, Bernstein SJ, et al., "Clinical Practice: Audit of Coronary Angiography and Bypass Surgery," Lancet, Vol. 335, 1990, pp. 1317–1320.

McGlynn EA, Brook RH, "Ensuring Quality of Care," in RJ Anderson, TH Rice, and GF Kominski, eds., Changing the U.S. Health Care System: Key Issues in Policy and Management, 2nd ed., San Francisco: Jossey–Bass, 2001.

McGlynn EA, Naylor CD, Anderson GM, et al., "Comparison of the Appropriateness of Coronary Angiography and Coronary Artery Bypass Graft Surgery Between Canada and New York State," Journal of the American Medical Association, Vol. 272, 1994, pp. 934–940.

Pilpel D, Fraser GM, Kosecoff J, et al., "Regional Differences in Appropriateness of Cholecystectomy in a Prepaid Health Insurance System," Public Health Reviews, Vol. 20, 1992/93, pp. 61–74.

Siu AL, Leibowitz A, Brook RH, "Use of the Hospital in a Randomized Trial of Prepaid Care," Journal of the American Medical Association, Vol. 259, 1988, pp. 1343–1346.

Siu AL, Sonnenberg FA, Manning WG, et al., "Inappropriate Use of Hospitals in a Randomized Trial of Health Insurance Plans," New England Journal of Medicine, Vol. 315, 1986, pp. 1259–1266.


Reliability

Theory suggests that using comparative effectiveness research could increase reliability, but there are no empirical studies of a link:

  • No empirical studies focus specifically on this policy option, and it is uncertain what effect comparative effectiveness studies conducted with new research funding will have on reliability.

No empirical studies focus specifically on this policy option, and it is uncertain what effect comparative effectiveness studies conducted with new research funding will have on reliability.

No literature exists on the relationship between comparative effectiveness and reliability of care. The evidence base developed for comparative effectiveness research could, in theory, be used to support a variety of activities that would contribute to improved reliability. Research results could provide foundational material for new or updated practice guidelines and patient education materials that could encourage the use of appropriate care. Studies could lead to development of better performance measures for applications such as internal quality improvement and pay for performance. Existing pay for performance programs provide financial incentives to providers who deliver care that is consistent with guidelines, but the metrics used rarely distinguish between acceptable alternative treatments.

On June 30, 2009, the Institute of Medicine (IOM) delivered a report to the Department of Health and Human Services that provided initial priorities for comparative effectiveness research funded under the American Recovery and Reinvestment Act (ARRA) (IOM, 2009). The IOM committee took a broad approach both to the definition of comparative effectiveness research and to the specific priorities recommended to the department. Half of the 100 priorities have health delivery as the primary or secondary research focus. Thus, assuming the department follows the recommendations set forth by the IOM, new information that could lead to improvements in reliability will be generated. It is premature to speculate on the precise effect these studies will have on reliability. AHRQ also funds research that addresses questions that might more directly affect reliability, but there is relatively little research in this area, and its translation into practice remains at an early stage.

Reference

Institute of Medicine (IOM), Committee on Comparative Effectiveness Research Prioritization, Initial National Priorities for Comparative Effectiveness Research, Washington, D.C.: National Academies Press, 2009.


Patient Experience

If comparative effectiveness research is incorporated into shared decisionmaking, evidence suggests that the experience of patients would improve:

  • A review of the literature suggests that patient experience would improve if comparative effectiveness research were incorporated into shared decisionmaking (SDM) aids. Read more below
  • The magnitude of the effect is impossible to predict because of the number of assumptions that must be made. Read more below

A review of the literature suggests that patient experience would improve if comparative effectiveness research were incorporated into shared decisionmaking aids.

Patients could use comparative effectiveness research in a variety of ways that could potentially improve their experience with care. The strongest evidence for how comparative effectiveness information could improve patient experience comes from a method called "shared decisionmaking" (SDM). SDM is two-way, doctor-patient communication that explicitly incorporates patient preferences for health states, clinical options, and outcomes into treatment decisions (Charles, Whelan, and Gafni, 1999; Charles, Gafni, and Whelan, 1999). The information that drives SDM could be derived from comparative effectiveness research.

The literature on SDM suggests that incorporating it into the interaction between patient and care provider improves patients' experience with care. Joosten et al. (2008) conducted a review of published randomized studies that compared SDM with a control strategy for treatment decisionmaking. Outcomes of interest included treatment adherence, patient satisfaction, well-being, and/or quality of life. Of the eleven randomized controlled trials identified, three involved cancer treatment, two involved mental health conditions, and the remainder involved peptic ulcers, ischemic heart disease, hormone replacement, dentistry, and benign prostatic hypertrophy. Five of the studies showed no difference in key outcomes between SDM and a control strategy. One clinical trial showed positive long-term effects, and five clinical trials, including both studies that involved mental health conditions, showed a positive effect of SDM on outcome measures.

Joosten and colleagues' review of published studies suggests that SDM's impact on patient experience is most pronounced when it is incorporated into long-term communication strategies, as opposed to a one-time communication, and for patients with chronic illness. Several observational studies have also shown that SDM is associated with higher patient satisfaction in general, and/or satisfaction with communication, for conditions such as end-of-life care, major depression, and early-stage breast cancer (White et al., 2007; Swanson et al., 2007; Waljee et al., 2007).

The magnitude of the effect is impossible to predict because of the number of assumptions that must be made.

To be used for SDM and to have an impact on patient experience, the results of comparative effectiveness studies need to be translated into decision aids, which providers and patients would then use to facilitate the communication process. SDM is currently used in few treatment decisions, and it is unclear whether patients and providers will adopt it more broadly.

Patients generally express a preference for participation in the decisionmaking process (Frosch and Kaplan, 1999; Kaplan and Frosch, 2005). However, patients' preferences for SDM have been shown to vary by sociodemographic characteristics. Studies have shown that younger patients have expressed stronger preferences for SDM than have older patients (Deber et al., 2007; Arora and McHorney, 2000; Krupat et al., 2001).

Physician adoption of SDM into practice also varies. Communicating the amount of information that SDM requires may add considerably to the time spent in consultation with patients. Characteristics of physician practices that have been shown to be associated with greater use of SDM methods include lower volume, longer office visits, and doctor training in either primary care or interviewing skills (Kaplan, 1995). Wennberg et al. (2007) propose incorporating SDM into pay for performance through the Medicare program in order to advance its widespread adoption.

References

Arora NK, McHorney CA, "Patient Preferences for Medical Decision Making: Who Really Wants to Participate?" Medical Care, Vol. 38, 2000, pp. 335–341.

Charles C, Gafni A, Whelan T, "Decision Making in the Physician–Patient Encounter: Revisiting the Shared Treatment Decisionmaking Model," Social Science & Medicine, Vol. 49, 1999, pp. 651–661.

Charles C, Whelan T, Gafni A, "What Do We Mean by Partnership in Making Decisions About Treatment?" BMJ, Vol. 319, 1999, pp. 780–782.

Deber RB, Kraetschmer N, Urowitz S, Sharpe N, "Do People Want to Be Autonomous Patients? Preferred Roles in Treatment Decision–Making in Several Patient Populations," Health Expectations, Vol. 10, No. 3, September 2007, pp. 248–258.

Frosch DL, Kaplan RM, "Shared Decision Making in Clinical Medicine: Past Research and Future Directions," American Journal of Preventive Medicine, Vol. 17, 1999, pp. 285–294.

Joosten EAG, DeFuentes–Merillas L, deWeert GH, Sensky T, van der Staak CPF, deJong CAJ, "Systematic Review of the Effects of Shared Decision–Making on Patient Satisfaction, Treatment Adherence and Health Status," Psychotherapy and Psychosomatics, Vol. 77, 2008, pp. 219–226.

Kaplan RM, Frosch DL, "Decision Making in Medicine and Health Care," Annual Review of Clinical Psychology, Vol. 1, 2005, pp. 525–556.

Kaplan SH, Gandek B, Greenfield S, Rogers WH, Ware JE Jr., "Patient and Visit Characteristics Related to Physicians' Participatory Decision–Making Style: Results from the Medical Outcomes Study," Medical Care, Vol. 33, 1995, pp. 1176–1187.

Krupat E, Bell RA, Kravitz RL, Thom D, Azari R, "When Physicians and Patients Think Alike: Patient–Centered Beliefs and Their Impact on Satisfaction and Trust," Journal of Family Practice, Vol. 50, 2001, pp. 1057–1062.

Swanson KA, Bastani R, Rubenstein LV, Meredith LS, and Ford DE, "Effect of Mental Health Care and Shared Decision Making on Patient Satisfaction in a Community Sample of Patients with Depression," Medical Care Research and Review, Vol. 64, No. 4, August 2007, pp. 416–430.

Waljee JF, Rogers MA, Alderman AK, "Decision Aids and Breast Cancer: Do They Influence Choice for Surgery and Knowledge of Treatment Options?" Journal of Clinical Oncology, March 20, 2007, Vol. 25, No. 9, pp. 1067–1073.

Wennberg JE, O'Connor AM, Collins ED, Weinstein JN, "Extending the P4P Agenda, Part I: How Medicare Can Improve Patient Decision Making and Reduce Unnecessary Care," Health Affairs, Vol. 26, No. 6, 2007, pp. 1564–1574.

White DB, Braddock CH, Bereknyei S, Curtis JR, "Toward Shared Decision Making at the End of Life in Intensive Care Units: Opportunities for Improvement," Archives of Internal Medicine, Vol. 167, March 12, 2007.


Health

Theory suggests that comparative effectiveness research could improve health but only if its use drives payers, providers, and patients toward more beneficial treatment options:

  • Theory suggests that comparative effectiveness research could improve health if only relatively more beneficial treatment options are used. Read more below
  • Unless comparative effectiveness research produces information showing the superior effectiveness of some treatments over others, and unless the research results lead to adoption of more effective care, health will not be improved. Read more below
  • No empirical studies specifically focus on the relationship between evidence-based coverage and health outcomes. Read more below

Theory suggests that comparative effectiveness research could improve health if only relatively more beneficial treatment options are used.

In theory, if comparative effectiveness information were widely available, transparent, and used consistently by payers, providers, and patients to affect treatment decisions, then health would be optimized because only relatively more effective treatments would be used. If comparative effectiveness information led to decreased use of procedures that carry more health risks relative to alternatives (e.g., decreased use of endoscopy in favor of a longer trial of antireflux medication), overall health might be improved. The magnitude of the net effect that comparative effectiveness research would have on health is difficult to determine and depends on the number of health conditions for which alternative options exist and the extent to which the information is used in decisionmaking.

Unless comparative effectiveness research produces information showing the superior effectiveness of some treatments over others, and unless the research results lead to adoption of more effective care, health will not be improved.

Comparative effectiveness research will likely demonstrate the equivalence of outcomes among compared treatments or that new treatments make incremental gains in health over existing ones. But unless comparative effectiveness research illustrates the substantial superiority of some treatments over others, and the results lead to widespread adoption of more effective care, the net effect on health is likely to be neutral. For example, if two medications were found to have equivalent health benefits and risks but different costs, patients' health would not change by taking the less costly instead of the more costly drug. However, patients may be more likely to adhere to a regimen of less costly drugs because they are more affordable. One study demonstrated a relationship between the level of patient cost sharing and initiation of medical therapy for patients with chronic illness (Solomon et al., 2009). Other studies have shown a relationship between the level of cost sharing and adherence to recommended medications (Goldman, Joyce, and Zheng, 2007; Goldman et al., 2004; Joyce et al., 2002).

In some circumstances, the comparative effectiveness of a particular intervention is not consistent across subgroups. For example, orthopedic surgical procedures may significantly improve functioning and quality of life in younger patients. However, because of the risk of surgical and postoperative complications, they may be deemed less effective than an alternative for very elderly patients. The extent to which this nuance can be incorporated into clinical practice is uncertain and may be critical to the impact on health.

No empirical studies specifically focus on the relationship between evidence-based coverage and health outcomes.

The literature that describes the comparative effectiveness of medical treatments is largely a condition- or population-specific patchwork, making overall conclusions difficult. Clinical trials, which generally assign patients to a prescribed treatment course for the purpose of comparing outcomes, provide some information about the comparative effectiveness of treatments or procedures in a controlled setting. For example, the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) studied the effectiveness of different antipsychotics in treating schizophrenia and found that an older antipsychotic medication was as effective as newer medications in reducing symptoms, although side effects varied among the treatment groups and among some subgroups of patients (Lieberman et al., 2005). The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT, 2002) compared several newer antihypertensive treatment regimens with older, less expensive diuretics in a population of hypertensive patients at high risk for adverse outcomes and found that rates of fatal and nonfatal myocardial infarction were similar across treatment groups. Summary reports also help to fill gaps in knowledge about effectiveness, but these sources do not provide information about what happens to health when the results of such studies are used to make coverage and benefit design decisions.

One study simulated the potential of value-based insurance design to increase adherence to medications among high-risk patients without increasing total spending on the medication. The study found that if high-risk patients faced no cost sharing through an annual license that eliminated per-prescription copayments, compliance with prescribed statins would be higher than under either a copayment-only or a copayment-plus-license model (Goldman et al., 2008).

References

ALLHAT [Antihypertensive and Lipid–Lowering Treatment to Prevent Heart Attack Trial] Officers and Coordinators, "Major Outcomes in High–Risk Hypertensive Patients Randomized to Angiotensin–Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic," Journal of the American Medical Association, Vol. 288, 2002, pp. 2981–2997.

Goldman DP, Jena AB, Philipson T, Sun E, "Drug Licenses: A New Model for Pharmaceutical Pricing," Health Affairs, Vol. 27, No. 1, 2008, pp. 122–129.

Goldman DP, Joyce GF, Escarce JJ, et al., "Pharmacy Benefits and the Use of Drugs by the Chronically Ill," Journal of the American Medical Association, Vol. 291, No. 19, 2004, pp. 2344–2350.

Goldman DP, Joyce GF, Zheng Y, "Prescription Drug Cost Sharing: Associations with Medication and Medical Utilization and Spending and Health," Journal of the American Medical Association, Vol. 298, No. 1, July 4, 2007, pp. 61–69.

Joyce GF, Escarce JJ, Solomon MD, Goldman DP, "Employer Drug Benefit Plans and Spending on Prescription Drugs," Journal of the American Medical Association, Vol. 288, No. 14, 2002, pp. 1733–1739.

Lieberman JA, et al., "Effectiveness of Antipsychotic Drugs in Patients with Chronic Schizophrenia," New England Journal of Medicine, Vol. 353, No. 12, 2005, pp. 1209–1223.

Solomon MD, Goldman DA, Joyce GF, Escarce JJ, "Cost Sharing and Initiation of Drug Therapy for the Chronically Ill," Archives of Internal Medicine, Vol. 169, No. 8, 2009, pp. 1–9.


Coverage

Not Applicable


Capacity

There is no theoretical basis for linking use of comparative effectiveness research and capacity.


Operational Feasibility

The operational feasibility associated with comparative effectiveness research depends on the applications that are pursued:

  • A national center for comparative effectiveness research could be initiated easily. Read more below
  • Translation of comparative effectiveness research into applications that could enhance clinical decisionmaking and the efficiency of the health services delivery system would be complex. Read more below

A national center for comparative effectiveness research could be initiated easily.

A center for comparative effectiveness research or a program of coordinated research funding could be instituted relatively easily. Decisions on funding and governance of such a center and on transparency in its operation would need to be made.

Options that have been proposed include incorporating comparative effectiveness research into, or transforming the mission of, an existing government agency, such as AHRQ; creating a new federal entity to conduct and disseminate the research; creating a quasi–governmental entity that would operate independently from the federal government; and creating a program to coordinate existing and new federally funded comparative effectiveness research, with the research itself conducted on a more decentralized basis (Wilensky, 2006).

The latter option mirrors provisions included in the federal American Recovery and Reinvestment Act of 2009 (ARRA), which contains $1.1 billion in new funding for comparative effectiveness research to be distributed among AHRQ, the National Institutes of Health, and the Office of the Secretary of the Department of Health and Human Services (U.S. Congress, 2009). Under the plan, federally funded comparative effectiveness research would be coordinated by a council of federal employees from the various agencies with health care responsibilities.

There have been a number of past efforts to establish centers for the evaluation of comparative effectiveness. The Office of Technology Assessment, which operated between 1972 and 1995, provided Congress with analyses of technical and scientific issues, including the effectiveness of health technologies (Eisenberg and Zarin, 2002). AHRQ, which is in the U.S. Department of Health and Human Services, funds comparative effectiveness research through its Evidence–based Practice Centers (EPC) Program, which analyzes the scientific evidence for a variety of medical interventions (AHRQ, 2002). The Drug Effectiveness Review Project (DERP) is a collaboration among a number of private and public organizations that systematically reviews the comparative effectiveness and side effect profiles of drugs within the same therapeutic class (Oregon Health & Science University, not dated). Its information has been used by states in the formulation of their Medicaid Preferred Drug Lists (Padrez et al., 2005). Established in 1999, the Medicare Coverage Advisory Committee serves an advisory role for Medicare, analyzing the comparative effectiveness of some new services and technologies (Garber, 2001). Within the private sector, the Blue Cross Blue Shield Association established the Technology Evaluation Center (TEC), which provides comprehensive reviews of a variety of therapies, drugs, and devices (Wilensky, 2006). The Food and Drug Administration assesses the safety and efficacy of drugs and devices. These organizations generally serve in an advisory role, and they do not mandate any particular coverage decisions.

Translation of comparative effectiveness research into applications that could enhance clinical decisionmaking and the efficiency of the health services delivery system would be complex.

The translation of the results of comparative effectiveness research into practical applications (for example, decision aids, coverage determinations, and incentive programs) presents challenges, particularly with respect to information infrastructure and the extent to which costs are a consideration.

The Institute of Medicine, in an ongoing roundtable on what would be required to integrate comparative effectiveness into practice, cites the need for significant buy-in on the design, governance, and funding of comparative effectiveness resources from diverse stakeholders (Institute of Medicine Roundtable on Evidence Based Medicine, 2008). The stakeholder base that could be affected by new uses of comparative effectiveness research is large and vocal. A major concern among stakeholders will be the extent to which they perceive that the research was conducted objectively (Wilensky, 2006). Any effort funded through the legislative process is subject to political challenges to its validity and usefulness. In addition, the ARRA's specific prohibition on using research results for policy determinations limits the extent to which federal-level action can take place.

There are significant practical considerations as well. As discussed in the sections above, the potential impact of comparative effectiveness research on practice depends on how frequently the information is used to affect decisionmaking. This requires that physicians have access to appropriate information at the point of decisionmaking. The information can be quite complex, including matching the results of research to the characteristics of the patient being seen by the physician, and may require decision support tools that are not in common use today. Such tools generally require that the physician have electronic medical records that can be linked to decision tools, which can be updated as new information emerges. A 2008 survey of electronic record use in ambulatory care reported that 4 percent of physicians had an extensive, fully functional electronic records system and 13 percent had a basic system (DesRoches et al., 2008). More recently, the same group of researchers surveyed hospitals and reported that 1.5 percent of U.S. hospitals had a comprehensive electronic records system and that an additional 7.6 percent had a basic system (i.e., the system was present in at least one clinical unit). Seventeen percent of hospitals reported using computerized provider order entry for medications (Jha et al., 2009).

These systems might also be necessary if coverage or incentive programs are applied in a more nuanced manner. With the exception of research that leads to a determination that a particular intervention should never be used (or covered), the more common result will be of the form, "this intervention is preferred in this class of patients and not preferred in this other class of patients." Using such results effectively in practice will require rapid adoption of information systems that are not in place today. Having the systems is an important first step, but physicians and patients also will have to be trained in their use.
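To make the decision-support requirement concrete, a rule of the form "preferred in this class of patients, not preferred in that class" can be encoded as machine-readable logic. The sketch below is purely illustrative: the condition, age threshold, and treatment names are hypothetical stand-ins (loosely echoing the orthopedic subgroup example in the Health section), not actual research findings or any real system's interface.

```python
# Illustrative sketch only: encoding patient-class-conditional treatment
# preferences, as a decision support tool linked to an electronic record
# might. Condition names, the age cutoff, and treatments are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    condition: str

# Each rule pairs a condition and a patient-class predicate with the
# treatment that hypothetical comparative effectiveness evidence prefers.
RULES = [
    ("osteoarthritis", lambda p: p.age < 80, "joint replacement"),
    ("osteoarthritis", lambda p: p.age >= 80, "conservative management"),
]

def preferred_treatment(patient: Patient):
    """Return the preferred treatment for the first matching patient class,
    or None when no evidence-based rule applies (defer to clinical judgment)."""
    for condition, predicate, treatment in RULES:
        if condition == patient.condition and predicate(patient):
            return treatment
    return None
```

Even this toy version shows why point-of-care adoption is demanding: the tool needs structured patient data (age, diagnosis) from an electronic record, and the rule table must be maintained as new comparative effectiveness results emerge.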

References

Agency for Healthcare Research and Quality (AHRQ), What Is AHRQ? Rockville, Md., AHRQ Publication No. 02-0011, February 2002. As of August 11, 2009: http://www.ahrq.gov/about/whatis.htm

DesRoches CM, et al., "Electronic Health Records in Ambulatory Care—A National Survey of Physicians," New England Journal of Medicine, Vol. 359, No. 1, July 3, 2008, pp. 50–60.

Eisenberg JM, Zarin D, "Health Technology Assessment in the United States," International Journal of Technology Assessment in Health Care, Vol. 18, 2002, pp. 192–198.

Garber A, "Evidence–Based Coverage Policy," Health Affairs, Vol. 20, 2001, pp. 62–83.

Institute of Medicine Roundtable on Evidence Based Medicine, Annual Report: Learning Healthcare System Concepts, v. 2008, Washington, D.C.: Institute of Medicine of the National Academies, 2008. As of August 12, 2009: http://www.iom.edu/Object.File/Master/57/381/Learning%20Healthcare%20System%20Concepts%20v200.pdf

Jha AK, et al., "Use of Electronic Health Records in U.S. Hospitals," New England Journal of Medicine, Vol. 360, No. 16, April 16, 2009, pp. 1628–1638.

Oregon Health & Science University, Center for Evidence–based Policy, Drug Effectiveness Review Project (DERP), Web site, not dated. As of February 17, 2017: http://www.ohsu.edu/xd/research/centers-institutes/evidence-based-policy-center/evidence/derp/index.cfm

Padrez R, Carino T, Blum J, Mendelson D, The Use of Oregon's Evidence–Based Reviews for Medicaid Pharmacy Policies: Experiences in Four States, Menlo Park, Calif.: The Henry J. Kaiser Family Foundation, Kaiser Commission on Medicaid and the Uninsured, 2005.

U.S. Congress, 111th Cong., 1st Sess., American Recovery and Reinvestment Act of 2009, Washington, D.C., H.R. 1, January 6, 2009. As of August 12, 2009: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.pdf

Wilensky GR, "Developing a Center for Comparative Effectiveness Information," Health Affairs, Web Exclusives, Vol. 25, No. 6, 2006, pp. w572–w585. Published online November 7, 2006. As of August 11, 2009: http://content.healthaffairs.org/cgi/content/abstract/25/6/w572
