This paper extends latent class models for the analysis of rater agreement to the case of agreement on ordered category ratings. Among the important features of this approach are that it provides a method for combining multiple ratings (ordinarily difficult to accomplish when ratings are made on an ordered category scale) and that it estimates the accuracy of individual and combined ratings. It also permits statistical analysis of the specific effects of rater bias and measurement error in determining rater agreement and disagreement. A plausible logistic function/threshold model is applied to reduce the number of parameters that require estimation. The model applies equally to agreement among two raters or among more than two. Useful special cases of the model are discussed, methods are described for model evaluation and comparison, and the model is shown to fit actual data well. A variant of the model applicable to the case of replicate measurement is also presented.
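To make the general idea concrete, the following is a minimal simulation sketch of a latent class model with a logistic threshold mechanism for ordered category ratings. All specifics here (two latent classes, two raters, the particular prevalences, biases, and cutpoints) are illustrative assumptions, not the paper's actual parameterization: each case belongs to a latent class, each rater's continuous perception is the class trait plus a rater-specific bias plus logistic measurement error, and fixed thresholds convert that perception to one of four ordered categories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration only; parameter values are assumptions.
n = 10_000
class_prev = 0.4                         # P(latent class = 1)
class_means = np.array([-1.0, 1.0])      # latent trait mean for each class
rater_bias = np.array([0.0, 0.3])        # per-rater bias (rater 2 rates higher)
thresholds = np.array([-1.5, 0.0, 1.5])  # cutpoints mapping to 4 ordered categories

latent = (rng.random(n) < class_prev).astype(int)  # latent class per case
trait = class_means[latent]

ratings = np.empty((n, 2), dtype=int)
for r in range(2):
    # Logistic measurement error: perception = trait + bias + noise,
    # then thresholding yields an ordered category 0..3.
    noise = rng.logistic(0.0, 1.0, size=n)
    perception = trait + rater_bias[r] + noise
    ratings[:, r] = np.searchsorted(thresholds, perception)

# Raw agreement rate between the two raters; disagreement here reflects
# both the bias difference and the measurement error, the two components
# the model separates.
agree = (ratings[:, 0] == ratings[:, 1]).mean()
print(f"raw agreement: {agree:.3f}")
```

In a fitted version of such a model, the class prevalence, biases, and thresholds would be estimated from the observed cross-classification of ratings rather than fixed as above; the threshold parameterization is what keeps the parameter count small relative to an unrestricted latent class model.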