Computing inter‐rater reliability and its variance in the presence of high agreement

Volume: 61, Issue: 1, Pages: 29 - 48
Published: May 1, 2008
Abstract
Pi (π) and kappa (κ) statistics are widely used in the areas of psychiatry and psychological testing to compute the extent of agreement between raters on nominally scaled data. It is a fact that these coefficients occasionally yield unexpected results in situations known as the paradoxes of kappa. This paper explores the origin of these limitations, and introduces an alternative and more stable agreement coefficient referred to as the AC1...
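The abstract stops short of the formulas themselves. As a rough illustration of the coefficients being compared, the sketch below computes percent agreement, Scott's π, Cohen's κ, and Gwet's AC1 for two raters from a contingency table of classification counts, using the standard chance-agreement definitions (the AC1 chance-agreement term is the one introduced in this paper). The function name and the example table are hypothetical and chosen only to show the high-agreement setting the abstract alludes to.

```python
import numpy as np

def agreement_coefficients(table):
    """Percent agreement, Scott's pi, Cohen's kappa and Gwet's AC1 for two
    raters, given a q x q contingency table of classification counts."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()                                  # joint classification proportions
    q = p.shape[0]                                # number of nominal categories
    pa = np.trace(p)                              # observed agreement
    row, col = p.sum(axis=1), p.sum(axis=0)       # marginal proportions of each rater
    pi_k = (row + col) / 2                        # average marginal proportion per category

    pe_pi = np.sum(pi_k ** 2)                     # chance agreement, Scott's pi
    pe_kappa = np.sum(row * col)                  # chance agreement, Cohen's kappa
    pe_ac1 = np.sum(pi_k * (1 - pi_k)) / (q - 1)  # chance agreement, Gwet's AC1

    def coef(pe):                                 # common chance-corrected form
        return (pa - pe) / (1 - pe)

    return {"percent_agreement": pa,
            "scott_pi": coef(pe_pi),
            "cohen_kappa": coef(pe_kappa),
            "gwet_ac1": coef(pe_ac1)}

# Hypothetical high-agreement table with skewed marginals: rows are rater A's
# categories, columns are rater B's; 120 of 127 subjects receive the same label.
counts = [[118, 5],
          [2,   2]]
print(agreement_coefficients(counts))
```

On this table the observed agreement is about 0.94, yet π and κ drop to roughly 0.33–0.34 because of the skewed marginals, while AC1 stays near 0.94; this is the kind of instability the abstract refers to as the paradoxes of kappa.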