Conference papers

The Zero Resource Speech Challenge 2021: Spoken language modelling

Abstract: We present the Zero Resource Speech Challenge 2021, which asks participants to learn a language model directly from audio, without any text or labels. The challenge is based on the Libri-light dataset, which provides up to 60k hours of audio from English audio books without any associated text. We provide a pipeline baseline system consisting of an encoder based on contrastive predictive coding (CPC), a quantizer ($k$-means) and a standard language model (BERT or LSTM). The metrics evaluate the learned representations at the acoustic (ABX discrimination), lexical (spot-the-word), syntactic (acceptability judgment) and semantic (similarity judgment) levels. We present an overview of the eight submitted systems from four groups and discuss the main results.
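The baseline pipeline discretizes continuous speech features before language modelling: frame-level CPC embeddings are mapped to a sequence of cluster indices by $k$-means, and that pseudo-text sequence is what the BERT or LSTM language model is trained on. A minimal NumPy-only sketch of the quantization step is below; it is illustrative, not the challenge code (the actual baseline clusters CPC features, typically with 50 units, and the `kmeans_quantize` function and all parameters here are hypothetical).

```python
import numpy as np


def kmeans_quantize(features, k=4, iters=20, seed=0):
    """Toy k-means quantizer: maps continuous frame features (N x D)
    to a sequence of N discrete unit indices in [0, k).

    Illustrative stand-in for the baseline's quantizer, which runs
    k-means over CPC frame embeddings.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen frames.
    centroids = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each frame to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned frames.
        for j in range(k):
            assigned = features[labels == j]
            if len(assigned):
                centroids[j] = assigned.mean(axis=0)
    return labels, centroids


# Usage: quantize a toy feature sequence into discrete units.
rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 16))  # stands in for CPC frame embeddings
units, cents = kmeans_quantize(feats, k=4)
```

The resulting unit sequence plays the role of text: it can be fed to a standard language model (BERT or LSTM in the baseline) exactly as a token sequence would be.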
Contributor: Emmanuel Dupoux
Submitted on: Monday, October 11, 2021 - 3:39:36 PM
Last modification on: Friday, November 18, 2022 - 9:24:55 AM






Ewan Dunbar, Mathieu Bernard, Nicolas Hamilakis, Tu Anh Nguyen, Maureen de Seyssel, et al. The Zero Resource Speech Challenge 2021: Spoken language modelling. Interspeech 2021 - Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic. ⟨10.1109/TPAMI.2021.3083839⟩. ⟨hal-03329301v2⟩
