Measuring the Typicality of Text

Using Multiple Coders for More Than Just Reliability and Validity Checks

Published in: Human Organization, v. 58, no. 3, Fall 1999, p. 313-322

Posted on January 01, 1999

by Gery W. Ryan


This article was published outside of RAND; the full text is available from the publisher.

Social scientists often use agreement among multiple coders to check the reliability and validity of the analytic process. High degrees of intercoder agreement indicate that multiple coders are applying the codes in the same manner and are thus acting as reliable measurement instruments. Coders who independently mark the same text for a theme provide evidence that the theme has external validity and is not just a figment of the investigator's imagination. In this article, I extend the use of multiple coders. I use data taken from clinicians' descriptions of personal illness experiences to demonstrate how agreement and disagreement among coders can be used to measure core and peripheral features of abstract constructs and themes. I then show how such measures of multicoder agreement can be used to identify typical or exemplary examples from a corpus of text.
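The idea of ranking text by multicoder agreement can be illustrated with a minimal sketch. This is not the article's procedure, only an assumed setup: each coder gives a binary judgment (theme present or absent) for each text segment, the names `codings` and `typicality` are hypothetical, and the data are invented for illustration. The proportion of coders marking a segment serves as a simple typicality score, with high-agreement segments treated as core (exemplary) and low-agreement segments as peripheral.

```python
# Hypothetical data: each segment is coded 1/0 by four coders
# for the presence of a theme.
codings = {
    "seg_a": [1, 1, 1, 1],  # all coders marked it: core/exemplary
    "seg_b": [1, 1, 0, 1],
    "seg_c": [0, 1, 0, 0],  # little agreement: peripheral
    "seg_d": [0, 0, 0, 0],
}

def typicality(marks):
    """Proportion of coders who marked the segment for the theme."""
    return sum(marks) / len(marks)

# Rank segments from most to least typical of the theme.
ranked = sorted(codings, key=lambda s: typicality(codings[s]), reverse=True)
print(ranked)
```

A richer version might weight coders by their overall reliability or use a chance-corrected agreement statistic rather than a raw proportion, but the ranking step would be the same.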

This report is part of the RAND Corporation External publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.