Suppose we would like to compare two raters using a kappa statistic, but the raters used different ranges of scores. This situation most often arises when one rater did not use the full range of scores that the other rater used.
Let us consider an example in which two graduate students were asked to rate 12 movies on a scale from 1 to 3. One rater used all three possible scores, whereas the other student did not like any of the movies and therefore rated all of them as either a 1 or a 2. Thus, the range of scores is not the same for the two raters.
Stata’s kap command estimates inter-rater agreement, and it handles both situations where the two variables have the same categories and situations where they do not, which is the case described above. Here is an example.
input rater1 rater2
1 1
1 1
1 1
1 1
2 2
2 2
2 2
2 2
3 2
3 2
3 2
3 2
end
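Before running kap, it can help to cross-tabulate the two ratings to see where the raters disagree. (This tabulation is an optional check, not required by kap.)

tabulate rater1 rater2

The resulting table has counts of 4 in the (1,1) and (2,2) diagonal cells and 4 in the (3,2) cell, with zeros elsewhere: the raters agree on 8 of the 12 movies, and the four movies rater1 scored as 3 were all scored as 2 by rater2.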
kap rater1 rater2

             Expected
Agreement    Agreement     Kappa   Std. Err.         Z      Prob>Z
-----------------------------------------------------------------
  66.67%      33.33%      0.5000     0.1667        3.00      0.0013
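As a check on this output, kappa can be computed by hand from its definition, kappa = (observed agreement - expected agreement) / (1 - expected agreement). Observed agreement is 8/12 = 66.67%. Expected agreement comes from the marginal proportions: rater1 gave each score to 4/12 of the movies, while rater2 gave a 1 to 4/12, a 2 to 8/12, and a 3 to none, so expected agreement is (4/12)(4/12) + (4/12)(8/12) + (4/12)(0/12) = 33.33%. A quick verification in Stata (this check is illustrative, not part of the kap output):

display (8/12 - 1/3) / (1 - 1/3)
.5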