Comparing the methods of measuring multi-rater agreement on an ordinal rating scale: a simulation study with an application to real data


SERTDEMİR Y., BURGUT H. R., ALPARSLAN Z. N., ÜNAL İ., GÜNAŞTI S.

JOURNAL OF APPLIED STATISTICS, vol.40, no.7, pp.1506-1519, 2013 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 40 Issue: 7
  • Publication Date: 2013
  • Doi Number: 10.1080/02664763.2013.788617
  • Journal Name: JOURNAL OF APPLIED STATISTICS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.1506-1519
  • Çukurova University Affiliated: Yes

Abstract

Agreement among raters is an important issue in medicine, as well as in education and psychology. Agreement between two raters on a nominal or ordinal rating scale has been investigated in many articles, and the multi-rater case with normally distributed ratings has also been explored at length. However, there is a lack of research on multiple raters using an ordinal rating scale. In this simulation study, several methods for analyzing rater agreement were compared, focusing on the special case of multiple raters using a bounded ordinal rating scale. The proposed agreement methods were compared under different settings, with three main ordinal data simulation settings (normal, skewed and shifted data). In addition, the proposed methods were applied to a real data set from dermatology. The simulation results showed that Kendall's W and the mean gamma substantially overestimated agreement in data sets with shifts between raters. ICC4 for bounded data should be avoided in agreement studies with rating scales of fewer than five categories, where this method substantially overestimated the simulated agreement. The difference in bias for all methods under study, except the mean gamma and Kendall's W, decreased as the number of rating-scale categories increased. The bias of ICC3 was consistent and small for nearly all simulation settings except the low-agreement setting in the shifted data. Researchers should be careful in selecting agreement methods, especially if shifts in ratings between raters exist, and may wish to apply more than one method before drawing conclusions.
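To make the compared quantities concrete, the sketch below shows one common way to compute Kendall's W (with a tie correction, since ordinal ratings produce many ties) and a single-rater consistency ICC, often labelled ICC(3,1), on a subjects-by-raters matrix. This is not code from the article: the function names, the toy data, and the specific ICC variant are illustrative assumptions, and the paper's ICC3/ICC4 labels may not correspond exactly to this formulation. The toy data also builds in a systematic shift for one rater, the scenario the abstract warns about.

```python
import numpy as np
from scipy.stats import rankdata


def kendalls_w(ratings):
    """Kendall's W with tie correction; `ratings` is subjects x raters."""
    n, m = ratings.shape
    # Rank the subjects separately within each rater (average ranks for ties).
    ranks = np.apply_along_axis(rankdata, 0, ratings)
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # Tie correction term summed over raters.
    t = 0.0
    for j in range(m):
        _, counts = np.unique(ratings[:, j], return_counts=True)
        t += (counts ** 3 - counts).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n) - m * t)


def icc3_single(ratings):
    """ICC(3,1): two-way mixed model, consistency, single rater."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # subject means
    col_means = ratings.mean(axis=0)   # rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)


# Hypothetical example: 30 subjects, 4 raters, 5-point ordinal scale.
# Rater 4 rates systematically higher, mimicking a "shifted" setting.
rng = np.random.default_rng(0)
latent = rng.normal(size=(30, 1)) + 0.3 * rng.normal(size=(30, 4))
shifted = latent + np.array([0.0, 0.0, 0.0, 0.8])
ordinal = np.digitize(shifted, bins=[-1.0, -0.3, 0.3, 1.0]) + 1  # values 1..5

print("Kendall's W:", round(kendalls_w(ordinal), 3))
print("ICC(3,1):  ", round(icc3_single(ordinal), 3))
```

Because Kendall's W is computed from within-rater ranks, a constant shift in one rater's scores leaves its value unchanged, whereas a consistency ICC is also rank-shift-insensitive but an absolute-agreement ICC would drop; comparing such behaviours on the same data is the kind of contrast the study examines.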