Phoneme-to-Articulatory mapping using bidirectional gated RNN

Conference paper, 2018

Théo Biasutto-Lervat, Slim Ouni
Abstract

Deriving articulatory dynamics from the acoustic speech signal has been addressed in several speech production studies. In this paper, we investigate whether articulatory dynamics can be predicted from phonetic information alone, without the acoustic speech signal. The input may be considered acoustically impoverished, since it likely carries no explicit coarticulation information, but we expect the phonetic sequence to provide compact yet rich knowledge. Motivated by the recent success of deep learning techniques in acoustic-to-articulatory inversion, we experimented with bidirectional gated recurrent neural network architectures. We trained these models on an EMA corpus and obtained performance comparable to state-of-the-art articulatory inversion from LSF features, while using only phoneme labels and durations.
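The paper itself provides no code, but the mapping it describes can be sketched as follows: each frame of phonetic input (e.g. a phoneme encoding plus duration features) is passed through a bidirectional gated recurrent layer, and the concatenated forward/backward states are projected to articulator (EMA) trajectories. The sketch below is a minimal, illustrative numpy implementation of a bidirectional GRU with random weights; all dimensions, names, and the single-layer structure are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

def gru_step(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate hidden state
    return (1.0 - z) * h + z * h_tilde

def bidirectional_gru(X, params_fwd, params_bwd, hidden):
    """Run a GRU over the sequence in both directions, concatenate states."""
    T = X.shape[0]
    hf, hb = np.zeros(hidden), np.zeros(hidden)
    fwd, bwd = [], [None] * T
    for t in range(T):                 # left-to-right pass
        hf = gru_step(X[t], hf, params_fwd)
        fwd.append(hf)
    for t in reversed(range(T)):       # right-to-left pass
        hb = gru_step(X[t], hb, params_bwd)
        bwd[t] = hb
    return np.concatenate([np.stack(fwd), np.stack(bwd)], axis=1)

rng = np.random.default_rng(0)
# Hypothetical sizes: 8-dim phonetic input, 16 hidden units, 12 EMA channels.
in_dim, hidden, n_art = 8, 16, 12
make_params = lambda: tuple(rng.normal(scale=0.1, size=s)
                            for s in [(in_dim, hidden), (hidden, hidden)] * 3)
W_out = rng.normal(scale=0.1, size=(2 * hidden, n_art))

X = rng.normal(size=(20, in_dim))      # 20 frames of phonetic features
H = bidirectional_gru(X, make_params(), make_params(), hidden)
Y = H @ W_out                          # predicted articulator trajectories
print(Y.shape)  # (20, 12)
```

Because the backward pass sees the upcoming phonemes, the network can in principle recover coarticulation effects that the per-frame phonetic input does not encode explicitly, which is the motivation for the bidirectional choice in the abstract.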
Main file: 1202_Paper.pdf (376.74 KB). Origin: files produced by the author(s).

Dates and versions

hal-01862587 , version 1 (27-08-2018)

Cite

Théo Biasutto-Lervat, Slim Ouni. Phoneme-to-Articulatory mapping using bidirectional gated RNN. Interspeech 2018 - 19th Annual Conference of the International Speech Communication Association, Sep 2018, Hyderabad, India. ⟨hal-01862587⟩