Speech enhancement with variational autoencoders and alpha-stable distributions

Simon Leglaive 1, Umut Simsekli 2, Antoine Liutkus 3, Laurent Girin 4, Radu Horaud 1
1 PERCEPTION - Interpretation and Modelling of Images and Videos, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
3 ZENITH - Scientific Data Management, LIRMM - Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier, CRISAM - Inria Sophia Antipolis - Méditerranée
4 GIPSA-DPC - Département Parole et Cognition
Abstract: This paper focuses on single-channel semi-supervised speech enhancement. We learn a speaker-independent deep generative speech model using the framework of variational autoencoders. The noise model remains unsupervised because we do not assume prior knowledge of the noisy recording environment. In this context, our contribution is to propose a noise model based on alpha-stable distributions, instead of the more conventional Gaussian non-negative matrix factorization approach found in previous studies. We develop a Monte Carlo expectation-maximization algorithm for estimating the model parameters at test time. Experimental results show the superiority of the proposed approach in terms of both perceptual quality and intelligibility of the enhanced speech signal.
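A distinguishing feature of the alpha-stable family used for the noise model is its heavy tails (for alpha < 2), which make it more robust to impulsive noise than the Gaussian case (alpha = 2). The paper itself does not provide code; as an illustrative sketch only, the standard Chambers-Mallows-Stuck method below draws symmetric alpha-stable samples, showing how the Gaussian arises as the alpha = 2 special case. The function name and interface are hypothetical, not from the paper.

```python
import numpy as np

def sample_sas(alpha, size, seed=None):
    """Draw unit-scale symmetric alpha-stable (SaS) samples via the
    Chambers-Mallows-Stuck method (symmetric case, i.e. skewness beta = 0).
    Valid for 0 < alpha <= 2; alpha = 2 is Gaussian, alpha = 1 is Cauchy."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

# alpha = 2 recovers a Gaussian with standard deviation sqrt(2);
# smaller alpha yields progressively heavier (impulsive) tails.
x = sample_sas(2.0, 100_000, seed=0)
print(round(float(x.std()), 1))
```

For alpha = 2 the formula reduces to 2 sin(U) sqrt(W), a zero-mean Gaussian with variance 2, which is a convenient sanity check on the sampler.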

Contributor: Simon Leglaive
Submitted on: Friday, February 8, 2019 - 11:25:30 AM
Last modification on: Saturday, October 12, 2019 - 1:34:17 AM
Long-term archiving on: Thursday, May 9, 2019 - 1:46:30 PM





Simon Leglaive, Umut Simsekli, Antoine Liutkus, Laurent Girin, Radu Horaud. Speech enhancement with variational autoencoders and alpha-stable distributions. ICASSP 2019 - International Conference on Acoustics Speech and Signal Processing, May 2019, Brighton, United Kingdom. pp.541-545, ⟨10.1109/ICASSP.2019.8682546⟩. ⟨hal-02005106⟩


