Recent evaluations of two regional medical home pilots (i.e., efforts to improve the capabilities and performance of primary care practices) within the Pennsylvania Chronic Care Initiative (PACCI) have produced differing results.
In the southeast region of the state, the intervention was associated with improvements in diabetes care, but no changes in other measures of quality, utilization, or costs relative to comparison practices. By contrast, the northeast region's intervention was associated with favorable changes, relative to comparison practices, in a wider array of quality measures as well as reductions in rates of hospital admissions, emergency department visits, and ambulatory visits to specialists.
Both studies used the same evaluation methods, so it is fair to compare them and instructive to ask: Why do we see these differences? As evaluators and conveners of these regional pilots, we believe both differences in “nature” (i.e., the local contexts into which these interventions were applied) and “nurture” (i.e., the interventions themselves) are responsible.
Nature: Differences In Context
Compared to primary care practices that participated in the southeast PACCI, those in the northeast PACCI had several advantages at baseline.
First, the northeast practices may have been “right sized” for rapid transformation: not too big to change quickly, but not so small that they lacked resources to make new capital and personnel investments. And when practice sites were small, they tended to be affiliated with larger provider organizations (Intermountain, Physicians Health Alliance, and Geisinger) that could bankroll and otherwise support their transformation (e.g., through access to “back office” executive leadership with significant organizational experience in care management, a resource available to six of the practices).
Through participation in the northeast PACCI learning collaborative, intervention conveners observed that practices lacking this expertise at baseline learned quickly from their peers. In contrast, practices in the southeast were predominantly small, independent private practices or much larger organizations (academic medical centers and community health centers) with less preexisting experience in care management.
Second, conveners observed differences in practice culture at baseline. In the northeast PACCI, physicians were more accustomed and receptive to practice transformation that was directed and facilitated by their practice leaders. In contrast, for some practices in the southeast PACCI, conveners noted initial physician non-participation in, and resistance to, new initiatives by practice leaders.
Third, the southeast regional pilot included community health centers and teaching hospitals that focused on underserved, sociodemographically vulnerable populations. The northeast region did not have the same representation of such providers, and while the northeast practices did not serve a wealthy population, they may not have faced the same degree of sociodemographic challenges as some of the southeast practices.
Fourth, the northeast region was more rural, with few hospital options and more consistent use of the same hospital over time by patients of a given practice, facilitating hospital-primary care relationships. In contrast, the southeast region was a large metropolitan area served by numerous hospitals, complicating the task of tracking hospital and emergency department care.
Fifth, evaluations of each regional pilot found that approximately one-third of the southeast practices adopted new electronic health records (EHRs) during the intervention, while all of the northeast practices already had EHRs at baseline. Adopting a new EHR can be stressful to physicians and staff, disrupt longstanding workflow, and distract from other aspects of practice transformation.
There is only so much a practice can do at once, and the starting position of the southeast practices may have put them at a significant disadvantage relative to their northeast counterparts.
Nurture: Differences In The Interventions Themselves
As described in the evaluations of these pilots, there were multiple differences between the northeast and southeast PACCI interventions. These differences reflect lessons the conveners learned from the earlier southeast PACCI implementation when designing the later northeast PACCI implementation, as well as design input from the northeast participants. Here, we review the differences that the conveners observed to be the most impactful.
First, in the southeast region, supplemental payments to practices were contingent on receiving National Committee for Quality Assurance (NCQA) medical home recognition, with greater payments for higher recognition levels. As a result, the southeast practices focused their early energies on applying for recognition, rather than engaging in practice transformation.
For example, one large southeast practice achieved early NCQA recognition by devoting extensive resources to the task. Having reached the threshold for increased payment, the practice was slow to then implement team-based care, planned care at every visit, population management, and a high-functioning care management infrastructure. In contrast, in the northeast intervention, supplemental payments (half earmarked for care management) began on day one and were not contingent on NCQA recognition, which was required only by month 18.
Second, the northeast region received a specific per-member, per-month payment to be used for care management only, whereas the southeast supplemental payment was a combined payment whose distribution was not contingent on hiring a care manager or implementing care management.
Third, the health plans participating in the northeast region provided greater direct support for care management than in the southeast region. In addition to providing utilization data to primary care practices (which might otherwise not receive timely feedback on hospital and emergency department use by their patients), the northeast health plans devoted their own personnel to giving personalized feedback and guidance to the participating practices.
Other studies have similarly observed the importance of data and data management to practices' performance under alternative payment models like those featured in the PACCI. In the southeast region, plan support for practice care management did not begin until year three of the pilot, and even then was inconsistently provided by the health plans. Conveners have noted that in later years (following the time period covered in published evaluations), practices in the southeast region showed steady improvement on quality measures.
Fourth, the northeast PACCI included a shared savings financial incentive for participating practices that also required practices to meet specified quality benchmarks. Conveners observed that the large shared savings payments made to some of the northeast PACCI practices at the end of year one were a significant motivator. The southeast PACCI did not include new incentives directly linked to the quality and costs of care until after the initial evaluation period.
As with all observational analyses of health system innovations, confounding by unmeasured variables also could contribute to the differences in the apparent effectiveness of the northeast and southeast PACCI medical home interventions. But even in the absence of such confounding, the factors we have listed are, at best, plausible explanations for the differences in evaluation results.
With only two regions to compare, we cannot identify which—if any—of these potential explanations truly contributed to the relative effectiveness of these interventions.
Putting It Together
No two medical home interventions are exactly alike, and recent studies have demonstrated their heterogeneity. Similarly, the context and setting of medical interventions differ widely and can have significant effects on their outcomes.
As demonstrated in Pennsylvania, conveners of medical home pilots are drawing from their early experiences to refine their intervention designs and improve their effectiveness. Such a learning process is likely to be a critical part of successfully implementing any new model of payment and care delivery.
Therefore, we believe there is more to learn by studying the differences between medical home interventions than by lumping them together. Publishing “negative” as well as “positive” results is critical to forming an evidence base for improving the effectiveness of medical home interventions.
By examining such differences between just two medical home pilots with disparate results, we have identified several features that might enhance the performance of future efforts. However, we will need evaluations of many more pilots, rigorously conducted and using similar methods (e.g., following the recommendations of the Commonwealth Fund's Patient-Centered Medical Home Evaluators' Collaborative and the Agency for Healthcare Research and Quality's Patient-Centered Medical Home Resource Center), to identify the active ingredients with confidence.
Mark Friedberg is a senior natural scientist at the RAND Corporation; Connie Sixta is senior consultant for Sixta Consulting, Inc., adjunct faculty at UTHSC, Houston, and has directed numerous national, international, and regional quality improvement initiatives; and Michael Bailit is president of Bailit Health Purchasing, LLC, a health care consulting firm.
This commentary was first published on June 19, 2015 on Health Affairs Blog. Copyright ©2015 Health Affairs by Project HOPE — The People-to-People Health Foundation, Inc.