RAND previously developed nonpayment codes to capture the number and level of post-operative visits that are part of the global period. In the 2017 Medicare physician fee schedule proposed rule, CMS proposed data collection on post-operative visits using similar codes. This report summarizes RAND's pilot test of the proposed codes via a survey using vignettes to assess whether physicians understood and could correctly apply the codes.
Research Questions
- Do physicians understand the proposed G-codes?
- How accurately do physicians and coding/billing staff apply the proposed G-codes to the vignettes?
The Centers for Medicare & Medicaid Services (CMS) uses the resource-based relative value system to determine payment for physicians and nonphysician practitioners for their professional services. For many surgeries and other types of procedures, Medicare payment includes pre- and post-operative visits delivered during a global period of 10 or 90 days. Congress mandated that CMS collect data on the "number and level" of visits in the global period from a representative sample of physicians beginning January 1, 2017.

At CMS's request, RAND developed a new set of nonpayment codes that could be used to capture the number and level of visits. In July 2016, CMS issued a proposed rule that included a slightly modified version of the codes developed by RAND and proposed to require their use by practitioners. Given that these codes had never been tested or used by practitioners, CMS asked RAND to pilot the proposed codes to determine whether practitioners understood and could accurately apply the codes.

RAND's approach was to create a series of vignettes and to test the use of these vignettes using semi-structured interviews with a small set of physicians, followed by more-extensive testing through surveys with a larger group of physicians. This report provides recommendations on how to use vignettes to test new codes and uncover questions about such codes. Such input could be used to help refine instructions for using codes, as well as to potentially refine the codes themselves.
Key Findings
- In interviews, individual physicians were able to apply the proposed G-codes to recent visits and the draft vignettes with reasonable accuracy. However, when we surveyed a larger group of physicians, error rates were roughly 30–40 percent.
- Accuracy varied widely across the five surveyed specialties (cardiology, dermatology, general surgery, neurosurgery, and ophthalmology), and the reasons for this are unclear. Each specialty received its own set of vignettes tailored to its practice, and some vignettes may have been easier to code than others, which may explain some of the variation.
- Common concerns about the proposed G-codes emerged, including the burden of reporting the codes, the difficulty of keeping track of time spent, the definitions of "typical" and "complex" visits, and how the codes capture work done by multiple practitioners.
Recommendations
- Using vignettes to test new codes could be considered before implementing similar codes in the fee schedule. Both the interviews and the survey uncovered a number of questions and errors. Such input could be used to refine instructions for practitioners and potentially the codes themselves, which may improve the overall accuracy of practitioner coding.
- Future work should explore how practitioners who use time-based codes track time, and whether they have difficulty accurately tracking time given that their care may extend over numerous encounters in a day. For example, one point of confusion for practitioners was how to round when using time increments. However, the rounding used for the proposed G-codes mimicked what is used for other time-based codes. Therefore, it is possible that practitioners are also confused by other time-based codes.
- Given the concern that physicians expressed about distinguishing between "typical" and "complex" visits, it may be useful to test whether practitioners also struggle to select the correct level of decisionmaking complexity for evaluation and management visits.
- As the larger health care system moves to more team-based care, distinguishing what work should be included when multiple practitioners are providing care will be increasingly important. Therefore, it may be useful to test existing codes for accuracy when care is provided by multiple practitioners.
Table of Contents
Chapter One: Introduction

Chapter Two: Approach to Testing the Proposed Nonpayment G-Codes

Chapter Three: Findings from Interviews on Nonpayment Codes

Chapter Four: Findings from the Survey Piloting Nonpayment Codes

Chapter Five: Lessons Learned from Piloting Nonpayment Codes for Capturing Post-Operative Care

Appendix A: Clinical Vignettes Used to Test Nonpayment Codes

Appendix B: Materials Provided to Interviewees

Appendix C: Survey Results by Specialty
The research described in this report was funded by the Centers for Medicare & Medicaid Services (CMS) and conducted by RAND Health.
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.