
Multilingual Unsupervised Sentence Simplification

Abstract: Progress in Sentence Simplification has been hindered by the lack of supervised data, particularly in languages other than English. Previous work has aligned sentences from original and simplified corpora such as English Wikipedia and Simple English Wikipedia, but this limits corpus size, domain, and language. In this work, we propose using unsupervised mining techniques to automatically create training corpora for simplification in multiple languages from raw Common Crawl web data. When coupled with a controllable generation mechanism that can flexibly adjust attributes such as length and lexical complexity, these mined paraphrase corpora can be used to train simplification systems in any language. We further incorporate multilingual unsupervised pretraining methods to create even stronger models and show that by training on mined data rather than supervised corpora, we outperform the previous best results. We evaluate our approach on English, French, and Spanish simplification benchmarks and reach state-of-the-art performance with a totally unsupervised approach. We will release our models and code to mine the data in any language included in Common Crawl.
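
The controllable generation mechanism mentioned in the abstract conditions the model on target attributes such as output length and lexical complexity. A minimal sketch of how such conditioning is commonly implemented, using hypothetical control-token names and ratio values (not taken from the paper's released code), is:

    def add_control_tokens(source_sentence, length_ratio=0.8, lexical_ratio=0.75):
        """Prepend control tokens that encode the desired output attributes.

        The token names and ratio values here are illustrative placeholders;
        the actual control vocabulary depends on the trained model.
        """
        tokens = [
            f"<LENGTH_RATIO_{length_ratio:.2f}>",    # target length relative to the source
            f"<LEXICAL_RATIO_{lexical_ratio:.2f}>",  # target lexical complexity relative to the source
        ]
        return " ".join(tokens) + " " + source_sentence

    # Example: the tagged sentence is then fed to the trained seq2seq simplifier.
    print(add_control_tokens("The committee adjudicated the dispute expeditiously."))
    # -> <LENGTH_RATIO_0.80> <LEXICAL_RATIO_0.75> The committee adjudicated the dispute expeditiously.

At inference time, varying these ratios steers the model toward shorter or lexically simpler outputs without retraining, which is what allows a paraphrase-trained model to be used for simplification.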
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-03109299
Contributor: Benoît Sagot
Submitted on: Wednesday, January 13, 2021 - 4:33:37 PM
Last modification on: Friday, January 15, 2021 - 3:08:35 AM

Identifiers

  • HAL Id: hal-03109299, version 1
  • arXiv: 2005.00352

Citation

Louis Martin, Angela Fan, Éric de la Clergerie, Antoine Bordes, Benoît Sagot. Multilingual Unsupervised Sentence Simplification. 2021. ⟨hal-03109299⟩
