Supersense tagging with inter-annotator disagreement

Abstract: Linguistic annotation underlies many successful approaches in Natural Language Processing (NLP), where annotated corpora are used for training and evaluating supervised learners. The consistency of annotation limits the performance of supervised models, and thus much effort goes into obtaining high-agreement annotated datasets. Recent research has shown that annotation disagreement is not random noise, but carries a systematic signal that can be used to improve the supervised learner. However, prior work was limited in scope, focusing only on part-of-speech tagging in a single language. In this paper we broaden the experiments to a semantic task (supersense tagging) across multiple languages. In particular, we analyse how systematic disagreement is in sense annotation, and we present a preliminary study of whether patterns of disagreement transfer across languages.
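The abstract turns on inter-annotator agreement and disagreement. As a hedged illustration only (not code or data from the paper), the sketch below computes Cohen's kappa, the standard chance-corrected agreement measure, on an invented toy sample of supersense labels:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items labelled identically.
    po = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected agreement if the annotators labelled independently
    # with their observed label distributions (marginals).
    ca, cb = Counter(ann_a), Counter(ann_b)
    pe = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Two hypothetical annotators assigning supersense labels to five tokens;
# they disagree on the fourth token only.
a = ["noun.person", "noun.food", "verb.motion", "noun.food", "noun.person"]
b = ["noun.person", "noun.food", "verb.motion", "noun.person", "noun.person"]
print(cohens_kappa(a, b))
```

A kappa well below 1 despite 80% raw agreement shows why chance correction matters; the paper's point is that the residual disagreement itself can carry usable signal.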
Document type:
Conference paper
Linguistic Annotation Workshop 2016, Aug 2016, Berlin, Germany. pp. 43-48

Cited literature: 20 references

https://hal.inria.fr/hal-01426747
Contributor: Héctor Martínez Alonso
Submitted on: Wednesday, 4 January 2017 - 19:09:08
Last modified on: Tuesday, 9 January 2018 - 09:50:02
Document(s) archived on: Wednesday, 5 April 2017 - 15:17:51

File

W16-1706.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01426747, version 1

Citation

Hector Martinez Alonso, Anders Johannsen, Barbara Plank. Supersense tagging with inter-annotator disagreement. Linguistic Annotation Workshop 2016, Aug 2016, Berlin, Germany. pp. 43-48. 〈hal-01426747〉

Metrics

Record views: 130
File downloads: 80