Token-level and sequence-level loss smoothing for RNN language models

Abstract: Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. First, it treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training, tokens are predicted given ground-truth sequences, while at test time prediction is conditioned on generated output sequences. To overcome these limitations we build upon the recent reward augmented maximum likelihood approach, i.e. sequence-level smoothing, which encourages the model to predict sentences close to the ground truth according to a given performance metric. We extend this approach to token-level loss smoothing, and propose improvements to the sequence-level smoothing approach. Our experiments on two different tasks, image captioning and machine translation, show that token-level and sequence-level loss smoothing are complementary and significantly improve results.
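The abstract only names the idea, so here is a rough illustration of what token-level loss smoothing can look like: the one-hot target for each ground-truth token is replaced by a distribution over the vocabulary that favours similar tokens. This is a minimal sketch, not the authors' implementation; the function name `smoothed_token_loss`, the embedding-based similarity matrix, and the temperature `tau` are assumptions for illustration (the paper additionally treats sequence-level smoothing, which reweights whole sampled sentences by a reward such as BLEU).

```python
# Minimal sketch of token-level loss smoothing (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def smoothed_token_loss(logits, targets, similarity, tau=0.1):
    """Cross-entropy against a smoothed (non one-hot) target distribution.

    logits:     (batch, seq_len, vocab) unnormalised model scores
    targets:    (batch, seq_len) ground-truth token ids
    similarity: (vocab, vocab) pairwise token similarities, e.g. cosine
                similarity of word embeddings (an assumption of this sketch)
    tau:        temperature; lower values keep more mass on the reference token
    """
    # Smoothed target: softmax over similarities to the reference token, so
    # tokens "close" to the ground truth receive part of the probability mass.
    target_dist = F.softmax(similarity[targets] / tau, dim=-1)  # (batch, seq_len, vocab)
    log_probs = F.log_softmax(logits, dim=-1)
    # Expected negative log-likelihood under the smoothed target distribution.
    return -(target_dist * log_probs).sum(dim=-1).mean()

# Toy usage with random tensors.
vocab, batch, seq_len = 100, 2, 5
logits = torch.randn(batch, seq_len, vocab)
targets = torch.randint(0, vocab, (batch, seq_len))
emb = F.normalize(torch.randn(vocab, 16), dim=-1)
similarity = emb @ emb.T  # cosine similarities between token embeddings
print(smoothed_token_loss(logits, targets, similarity))
```

With `tau` close to zero the smoothed target collapses back to the one-hot case, recovering standard maximum likelihood training.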
Document type:
Conference paper
ACL - 56th Annual Meeting of the Association for Computational Linguistics, Jul 2018, Melbourne, Australia

Cited literature: 48 references

https://hal.inria.fr/hal-01790879
Contributor: Thoth Team
Submitted on: Monday, May 14, 2018 - 10:19:13
Last modified on: Thursday, October 11, 2018 - 08:48:03
Document(s) archived on: Tuesday, September 25, 2018 - 22:25:32

File

paper.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01790879, version 1

Citation

Maha Elbayad, Laurent Besacier, Jakob Verbeek. Token-level and sequence-level loss smoothing for RNN language models. ACL - 56th Annual Meeting of the Association for Computational Linguistics, Jul 2018, Melbourne, Australia. 〈hal-01790879〉


Metrics

Record views: 319

File downloads: 297