Conference papers

Supersense tagging with inter-annotator disagreement

Abstract: Linguistic annotation underlies many successful approaches in Natural Language Processing (NLP), where annotated corpora are used for training and evaluating supervised learners. The consistency of annotation limits the performance of supervised models, and thus considerable effort goes into obtaining high-agreement annotated datasets. Recent research has shown that annotation disagreement is not random noise, but carries a systematic signal that can be used to improve the supervised learner. However, prior work was limited in scope, focusing only on part-of-speech tagging in a single language. In this paper we broaden the experiments to a semantic task (supersense tagging) using multiple languages. In particular, we analyse how systematic the disagreement is for sense annotation, and we present a preliminary study of whether patterns of disagreement transfer across languages.
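The abstract repeatedly refers to annotator agreement and disagreement. As background, a common way to quantify agreement between two annotators is Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative only: the supersense labels and annotations are invented for the example, and this is the standard kappa formula, not the paper's own analysis.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the annotators labelled independently,
    # each following their own label distribution.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy supersense annotations for four tokens (hypothetical labels).
a = ["noun.person", "noun.person", "verb.motion", "noun.artifact"]
b = ["noun.person", "noun.artifact", "verb.motion", "noun.artifact"]
print(round(cohens_kappa(a, b), 3))  # → 0.636
```

Items where the two annotators disagree (such as the second token above) are exactly the cases the paper argues carry systematic signal rather than noise.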

Contributor: Héctor Martínez Alonso
Submitted on: Wednesday, January 4, 2017 - 7:09:08 PM
Last modification on: Tuesday, October 25, 2022 - 6:54:05 PM
Long-term archiving on: Wednesday, April 5, 2017 - 3:17:51 PM


Files produced by the author(s)


  • HAL Id: hal-01426747, version 1


Héctor Martínez Alonso, Anders Johannsen, Barbara Plank. Supersense tagging with inter-annotator disagreement. Linguistic Annotation Workshop 2016, Aug 2016, Berlin, Germany. pp. 43-48. ⟨hal-01426747⟩


