2 years ago
#50319

James
Calculating multi-label inter-annotator agreement in Python
Can anyone recommend a particular metric/Python library for assessing the agreement between 3 annotators when the data can be assigned a combination of labels (as seen below)?
| | Msg_1 | Msg_2 | Msg_3 | Msg_4 |
|---|---|---|---|---|
| Annotator_1 | a,b,c | b | c | a,b,c |
| Annotator_2 | a,c | b | c | a |
| Annotator_3 | b,c | a,b | c | a,b |
I have tried Python implementations of Krippendorff's alpha, but they don't seem to support multi-label data.
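One direction I have been looking at, though I haven't fully verified it, is NLTK's `AnnotationTask`, which accepts a custom distance function. Passing the MASI set distance and encoding each cell as a `frozenset` of labels would give a Krippendorff-style alpha over set-valued annotations. A sketch using the table above:

```python
from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import masi_distance

# Each datum is a (coder, item, label) triple; labels are frozensets so that
# masi_distance can credit partial overlap between multi-label assignments.
data = [
    ("annotator_1", "msg_1", frozenset("abc")),
    ("annotator_1", "msg_2", frozenset("b")),
    ("annotator_1", "msg_3", frozenset("c")),
    ("annotator_1", "msg_4", frozenset("abc")),
    ("annotator_2", "msg_1", frozenset("ac")),
    ("annotator_2", "msg_2", frozenset("b")),
    ("annotator_2", "msg_3", frozenset("c")),
    ("annotator_2", "msg_4", frozenset("a")),
    ("annotator_3", "msg_1", frozenset("bc")),
    ("annotator_3", "msg_2", frozenset("ab")),
    ("annotator_3", "msg_3", frozenset("c")),
    ("annotator_3", "msg_4", frozenset("ab")),
]

# Krippendorff's alpha with MASI as the distance between label sets.
task = AnnotationTask(data, distance=masi_distance)
print(task.alpha())
```

I'm not sure whether this treatment of multi-label data is statistically equivalent to the standard alpha implementations I tried, so corrections are welcome.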
Thanks.
Tags: python, annotations, training-data, multilabel-classification