Characterizing Patient Requests and Physician Responses in Office Practice
Research, posted on rand.org, 2002. Published in: Health Services Research, v. 37, no. 1, Feb. 2002, p. 215-235.
OBJECTIVE: To assess the reliability, applicability, and validity of a refined system, the taxonomy of requests by patients (TORP), for characterizing patient requests and physician responses in office practice.

STUDY SETTINGS: Data were obtained from visits to six general internists practicing in North-Central California in 1994 and eight cardiologists practicing in the same region in 1998.

STUDY DESIGN: This was an observational study of patient requests and physician responses in two practice settings. Patients were surveyed before and after the visit. Physicians were surveyed immediately after the visit, and all visits were audio recorded for future study.

DATA COLLECTION/EXTRACTION METHODS: TORP was refined using input from a multidisciplinary panel. Audiotape recordings of 131 visits (71 in internal medicine and 60 in cardiology) were rated independently by two coders. Estimates of classifying reliability (intercoder agreement on the sorting of requests into categories) and unitizing reliability (intercoder agreement on the labeling of elements of discourse as requests and subsequent classification into categories) were calculated. Validity was assessed by testing three specific hypotheses concerning the antecedents and consequences of patient requests and request fulfillment.

PRINCIPAL FINDINGS: The overall unitizing kappa for identifying patients' requests was 0.64, and the classification kappa was 0.73, indicating substantial agreement beyond chance. The average patient made 4.19 requests for information and 0.88 requests for physician action; there were few differences in the spectrum of requests between internal medicine and cardiology. Approximately 15 percent of visits included a direct request for completion of paperwork. Patients who were very or extremely worried about their health made more requests than those who were not (6.06 vs. 3.89, p < 0.05). Visits involving more patient requests took longer (p < 0.05) and were perceived as more demanding by the treating physician (p = 0.025). The vast majority of requests were fulfilled.

CONCLUSIONS: The refined TORP shows evidence of both unitizing and classification reliability and should be a useful tool for understanding the clinical negotiation. In addition, the system appears applicable to both generalist and specialist practices. More experience with the system is necessary to appraise TORP's ability to predict important clinical outcomes.
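For readers unfamiliar with the statistic, the kappa coefficients reported above measure intercoder agreement corrected for chance. The abstract does not specify the exact variant used, but for two independent coders the standard Cohen's kappa is

    kappa = (p_o - p_e) / (1 - p_e)

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance alone. A kappa of 0.73, for example, indicates that the coders resolved about 73 percent of the disagreement that chance would not explain; by the commonly cited Landis and Koch benchmarks, values between 0.61 and 0.80 are read as substantial agreement.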
This publication is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.