Conference paper, Year: 2021

Noisy UGC Translation at the Character Level: Revisiting Open-Vocabulary Capabilities and Robustness of Char-Based Models

Abstract

This work explores the capabilities of character-based Neural Machine Translation to translate noisy User-Generated Content (UGC), with a strong focus on exploring the limits of such approaches to handle productive UGC phenomena, which, almost by definition, cannot be seen at training time. Within a strict zero-shot scenario, we first study, on a small annotated dataset we developed, the detrimental impact of various user-generated content phenomena on translation performance, and then show that such models are indeed incapable of handling unknown letters, which leads to catastrophic translation failure once such characters are encountered. We further confirm this behavior with a simple, yet insightful, copy task experiment and highlight the importance of reducing the vocabulary size hyper-parameter to increase the robustness of character-based models for machine translation.
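To make the failure mode described above concrete, the following sketch (not the paper's code; all names such as CharVocab and max_size are illustrative assumptions) shows how a typical character-level NMT pipeline with a capped vocabulary collapses every character unseen at training time into a single unknown token, which is why an unexpected letter cannot even be copied to the output.

```python
# Minimal illustrative sketch, assuming a fairseq-style character vocabulary
# capped at a fixed size. Not the authors' implementation.
from collections import Counter


class CharVocab:
    def __init__(self, corpus, max_size=500):
        # Keep only the max_size most frequent training characters.
        counts = Counter(ch for sent in corpus for ch in sent)
        self.unk = "<unk>"
        self.itos = [self.unk] + [c for c, _ in counts.most_common(max_size)]
        self.stoi = {c: i for i, c in enumerate(self.itos)}

    def encode(self, sentence):
        # Any out-of-vocabulary character is mapped to the <unk> id (0):
        # the model sees the same symbol for every unknown letter or emoji.
        return [self.stoi.get(ch, 0) for ch in sentence]


train = ["il fait beau", "on se voit demain"]
vocab = CharVocab(train, max_size=100)

# A UGC sentence containing a symbol never seen in training: at that
# position the encoder only receives <unk>, so even a copy task fails.
print(vocab.encode("il fait beau ☀"))
```

With a smaller vocabulary size cap, rare training characters are also folded into <unk>, so the model is exposed to the unknown token during training; this is one way to read the abstract's observation that reducing the vocabulary size hyper-parameter increases robustness.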

Dates and versions

hal-03540174, version 1 (23-01-2022)

Identifiers

Cite

José Carlos Rosales Núñez, Guillaume Wisniewski, Djamé Seddah. Noisy UGC Translation at the Character Level: Revisiting Open-Vocabulary Capabilities and Robustness of Char-Based Models. W-NUT 2021 - 7th Workshop on Noisy User-generated Text (co-located with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic. ⟨hal-03540174⟩