Measuring the Typicality of Text
Using Multiple Coders for More Than Just Reliability and Validity Checks
Published in: Human Organization, v. 58, no. 3, Fall 1999, p. 313-322
Posted on RAND.org on January 01, 1999
Social scientists often use agreement among multiple coders to check the reliability and validity of the analytic process. High degrees of intercoder agreement indicate that multiple coders are applying the codes in the same manner and are thus acting as reliable measurement instruments. Coders who independently mark the same text for a theme provide evidence that the theme has external validity and is not just a figment of the investigator's imagination. In this article, I extend the use of multiple coders. I use data taken from clinicians' descriptions of personal illness experiences to demonstrate how agreement and disagreement among coders can be used to measure core and peripheral features of abstract constructs and themes. I then show how such measures of multicoder agreement can be used to identify typical or exemplary examples from a corpus of text.
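The core idea — that the fraction of coders marking a segment for a theme can serve as a typicality score, with high-agreement segments being core exemplars and low-agreement segments peripheral ones — can be sketched in a few lines. This is a minimal illustration, not the article's actual procedure; the segment IDs, coder names, and the simple proportion-based score are all invented for the example.

```python
from collections import Counter

# Hypothetical data: each coder independently lists the text segments
# they marked for a given theme.
coder_markings = {
    "coder_a": {"seg1", "seg2", "seg3"},
    "coder_b": {"seg1", "seg2", "seg4"},
    "coder_c": {"seg1", "seg3", "seg5"},
}

def typicality_scores(markings):
    """Return, for each segment, the fraction of coders who marked it.

    High agreement suggests a core (typical) example of the theme;
    low agreement suggests a peripheral one.
    """
    n_coders = len(markings)
    counts = Counter(seg for segs in markings.values() for seg in segs)
    return {seg: count / n_coders for seg, count in counts.items()}

scores = typicality_scores(coder_markings)
# Rank segments from most to least typical.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here "seg1", marked by all three coders, scores 1.0 and ranks first, while segments marked by a single coder score 1/3 and fall to the bottom — the segments an analyst might then pull from the corpus as exemplary quotations.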