OBJECTIVE: To evaluate the interrater reliability of rheumatologists' diagnosis of rheumatoid arthritis (RA) and the concordance between rheumatologists and computer algorithms in assessing the accuracy of a diagnosis of RA.
METHODS: Self-reported data on symptoms and signs relevant to a diagnosis of RA were reviewed by a panel of rheumatologists and by computer algorithms to assess the probability of a diagnosis of RA for 90 patients. The rheumatologists' review was validated against medical record review.
RESULTS: Interrater reliability among rheumatologists for a diagnosis of RA was 84%; the chance-corrected agreement (kappa) was 0.66. Agreement between the rheumatologists' rating and the best-performing algorithm was 95%. Using the rheumatologists' review as the standard, the algorithm had a sensitivity of 100%, a specificity of 88%, and a positive predictive value of 91%. Validation of the rheumatologists' review against medical records showed 81% sensitivity, 60% specificity, and 78% positive predictive value.
CONCLUSION: The reliability of rheumatologists' assignment of a diagnosis of RA from self-report data is good. Algorithms defining symptoms as either joint swelling or tenderness with a symptom duration of 4 weeks agree better with the rheumatologists' diagnosis than do algorithms relying on a longer symptom duration.
RELEVANCE: These findings have important implications for health services research and quality improvement interventions pertinent to case finding for RA through self-report data.
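The abstract describes a case-finding rule (joint swelling or tenderness with symptoms of at least 4 weeks) and reports chance-corrected agreement (kappa), sensitivity, specificity, and positive predictive value. The sketch below, which is not the authors' code, illustrates how such a rule and these statistics could be computed; the field names (joint_swelling, joint_tenderness, symptom_weeks) and the exact 4-week threshold are illustrative assumptions.

```python
# Minimal sketch, assuming a simple self-report case-finding rule and
# standard definitions of kappa, sensitivity, specificity, and PPV.

def algorithm_flags_ra(joint_swelling: bool, joint_tenderness: bool,
                       symptom_weeks: float) -> bool:
    """Flag probable RA if either joint swelling or tenderness is reported
    and symptoms have lasted at least 4 weeks (assumed threshold)."""
    return (joint_swelling or joint_tenderness) and symptom_weeks >= 4


def cohen_kappa(pairs):
    """Chance-corrected agreement between two raters.
    Each pair is (rater_a_positive, rater_b_positive)."""
    n = len(pairs)
    p_obs = sum(a == b for a, b in pairs) / n
    p_a = sum(a for a, _ in pairs) / n          # rater A's positive rate
    p_b = sum(b for _, b in pairs) / n          # rater B's positive rate
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_obs - p_chance) / (1 - p_chance)


def diagnostic_metrics(pairs):
    """Sensitivity, specificity, and PPV of a test against a reference standard.
    Each pair is (test_positive, reference_positive)."""
    tp = sum(t and r for t, r in pairs)
    fp = sum(t and not r for t, r in pairs)
    fn = sum(not t and r for t, r in pairs)
    tn = sum(not t and not r for t, r in pairs)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }


# Hypothetical usage with made-up self-report data for three patients,
# compared against a rheumatologist's rating as the reference standard.
patients = [
    {"joint_swelling": True,  "joint_tenderness": False, "symptom_weeks": 6,  "rheumatologist_ra": True},
    {"joint_swelling": False, "joint_tenderness": True,  "symptom_weeks": 2,  "rheumatologist_ra": False},
    {"joint_swelling": False, "joint_tenderness": False, "symptom_weeks": 10, "rheumatologist_ra": False},
]
pairs = [
    (algorithm_flags_ra(p["joint_swelling"], p["joint_tenderness"], p["symptom_weeks"]),
     p["rheumatologist_ra"])
    for p in patients
]
print(cohen_kappa(pairs), diagnostic_metrics(pairs))
```

Varying the symptom-duration threshold in algorithm_flags_ra is how one would compare a 4-week rule against rules requiring a longer duration, as discussed in the conclusion.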