Kappa Coefficients for Missing Data
de Raadt, A., Warrens, M. J., Bosker, R. J. & Kiers, H. A. L., Jun-2019, In: Educational and Psychological Measurement, 79(3), p. 558-576, 19 p.
Research output: Contribution to journal › Article › Academic › peer-review
Cohen’s kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen’s kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data under two missing data mechanisms—namely, missingness completely at random and a form of missingness not at random. The kappa coefficient considered in Gwet (Handbook of Inter-rater Reliability, 4th ed.) and the kappa coefficient based on listwise deletion of units with missing ratings were found to have virtually no bias and mean squared error if missingness is completely at random, and small bias and mean squared error if missingness is not at random. Furthermore, the kappa coefficient that treats missing ratings as a regular category appears to be rather heavily biased and has a substantial mean squared error in many of the simulations. Because it performs well and is easy to compute, we recommend using the kappa coefficient that is based on listwise deletion of units with missing ratings if it can be assumed that missingness is completely at random or not at random.
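The recommended listwise-deletion approach can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `missing` marker, and the example data are assumptions. Units where either rater's rating is missing are dropped before the usual Cohen's kappa is computed.

```python
def kappa_listwise(ratings_a, ratings_b, missing=None):
    """Cohen's kappa after listwise deletion: any unit for which
    either rater's rating equals the `missing` marker is dropped,
    then kappa is computed on the retained units as usual."""
    pairs = [(a, b) for a, b in zip(ratings_a, ratings_b)
             if a != missing and b != missing]
    n = len(pairs)
    categories = sorted({c for pair in pairs for c in pair})
    # Observed agreement: proportion of retained units rated identically.
    p_o = sum(1 for a, b in pairs if a == b) / n
    # Expected agreement under independence of the two raters,
    # using the marginal category proportions of each rater.
    p_e = sum(
        (sum(1 for a, _ in pairs if a == c) / n)
        * (sum(1 for _, b in pairs if b == c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1 - p_e)


# Illustrative data: two raters, two categories, one missing rating each.
kappa = kappa_listwise([1, 1, 2, 2, None, 1],
                       [1, 2, 2, 2, 1, None])
```

In the example, the last two units each have one missing rating and are deleted, so kappa is computed on the four complete pairs only. (The sketch assumes `p_e < 1`; perfect chance agreement would need a guard.)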
Number of pages: 19
Journal: Educational and Psychological Measurement
Publication status: Published - Jun-2019
Keywords: Cohen's kappa, agreement, reliability