Publication

Kappa Coefficients for Missing Data

de Raadt, A., Warrens, M. J., Bosker, R. J. & Kiers, H. A. L., Jun-2019, In : Educational and Psychological Measurement. 79, 3, p. 558-576 19 p.

Research output: Contribution to journal › Article › Academic › peer-review


Cohen’s kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen’s kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data under two missing data mechanisms—namely, missingness completely at random and a form of missingness not at random. The kappa coefficient considered in Gwet (Handbook of Inter-rater Reliability, 4th ed.) and the kappa coefficient based on listwise deletion of units with missing ratings were found to have virtually no bias and mean squared error if missingness is completely at random, and small bias and mean squared error if missingness is not at random. Furthermore, the kappa coefficient that treats missing ratings as a regular category appears to be rather heavily biased and has a substantial mean squared error in many of the simulations. Because it performs well and is easy to compute, we recommend using the kappa coefficient based on listwise deletion of units with missing ratings if it can be assumed that missingness is completely at random or not at random.
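The recommended listwise-deletion approach can be illustrated with a minimal sketch: drop every unit for which either rating is missing, then compute Cohen's kappa on the remaining complete pairs. This is an assumed illustrative implementation, not the authors' code; the function name and missing-value convention are hypothetical.

```python
from collections import Counter

def kappa_listwise(ratings1, ratings2, missing=None):
    """Cohen's kappa after listwise deletion (illustrative sketch).

    Units where either rater's rating equals `missing` are dropped;
    kappa is then computed on the remaining complete pairs.
    """
    pairs = [(a, b) for a, b in zip(ratings1, ratings2)
             if a != missing and b != missing]
    n = len(pairs)
    if n == 0:
        raise ValueError("no complete pairs remain after listwise deletion")
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in pairs) / n
    # Expected agreement under independence, from the marginal counts
    marg1 = Counter(a for a, _ in pairs)
    marg2 = Counter(b for _, b in pairs)
    categories = set(marg1) | set(marg2)
    p_e = sum(marg1[c] * marg2[c] for c in categories) / n**2
    return (p_o - p_e) / (1 - p_e)

# Example: the unit rated (None, 'x') and the unit rated ('y', None)
# are deleted; kappa is computed on the three complete pairs.
k = kappa_listwise(['x', 'x', 'y', None, 'y'],
                   ['x', 'y', 'y', 'x', None])
```

In this example three complete pairs remain, with observed agreement 2/3 and expected agreement 4/9, giving kappa = (2/3 − 4/9)/(1 − 4/9) = 0.4.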
Original language: English
Pages (from-to): 558-576
Number of pages: 19
Journal: Educational and Psychological Measurement
Volume: 79
Issue number: 3
Publication status: Published - Jun-2019

Keywords

  • Cohen's kappa, agreement, reliability


ID: 82665056