Conference Papers Year : 2019

Multichannel Speech Enhancement Based on Time-frequency Masking Using Subband Long Short-Term Memory

Xiaofei Li (1), Radu Horaud (1)
(1) Inria Grenoble Rhône-Alpes
Abstract

We propose a multichannel speech enhancement method using a long short-term memory (LSTM) recurrent neural network. The proposed method operates in the short-time Fourier transform (STFT) domain. A single LSTM network, shared across all frequency bands, processes each frequency band individually by mapping the multichannel noisy STFT coefficient sequence of that band to the corresponding sequence of STFT magnitude ratio masks for one reference channel. This subband LSTM network exploits the differences between the temporal and spatial characteristics of speech and noise: speech is non-stationary and spatially coherent, while noise is relatively stationary and less spatially correlated. Experiments with different types of noise show that the proposed method outperforms both a deep-learning-based full-band baseline and an unsupervised method. In addition, because it does not learn the wideband spectral structure of either speech or noise, the proposed subband LSTM network generalizes very well to unseen speakers and noise types.
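The abstract's key idea is that one network is trained on frequency bands rather than on full spectra: each band contributes its own training sequence, with multichannel STFT coefficients as input and a magnitude ratio mask of a reference channel as target. The sketch below illustrates that data layout in NumPy. All shapes, the normalization, and the variable names are illustrative assumptions, not the authors' implementation; the LSTM itself is omitted.

```python
import numpy as np

# Hypothetical shapes for illustration (not taken from the paper):
C, F, T = 4, 257, 100          # microphone channels, STFT frequency bins, time frames
ref_ch = 0                     # reference channel index (assumption)

rng = np.random.default_rng(0)
# Stand-ins for multichannel noisy STFT coefficients and the clean
# reference-channel STFT (complex-valued).
noisy = rng.standard_normal((C, F, T)) + 1j * rng.standard_normal((C, F, T))
clean_ref = rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))

# One training sequence per frequency band: the same subband network sees
# every band, so the batch axis is the frequency axis.
# Features per time frame: real and imaginary parts of all channels -> 2*C values.
x = np.concatenate([noisy.real, noisy.imag], axis=0)   # (2C, F, T)
x = x.transpose(1, 2, 0)                               # (F, T, 2C): batch = bands

# Per-band normalization (an assumed choice) so the network cannot rely on
# wideband spectral structure -- consistent with the generalization argument
# in the abstract.
norm = np.linalg.norm(np.abs(noisy[ref_ch]), axis=-1, keepdims=True) + 1e-8  # (F, 1)
x /= norm[:, :, None]

# Training target: STFT magnitude ratio mask of the reference channel,
# clipped to [0, 1].
mask = np.abs(clean_ref) / (np.abs(noisy[ref_ch]) + 1e-8)
mask = np.clip(mask, 0.0, 1.0)                         # (F, T)

print(x.shape, mask.shape)     # -> (257, 100, 8) (257, 100)
```

With this layout, a standard batch-first LSTM would consume `x` as a batch of `F` sequences of length `T`, and a sigmoid output layer would predict `mask` frame by frame; at test time the predicted mask multiplies the reference-channel STFT before inverse transformation.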
Main file: Xiaofei_WASPAA2019.pdf (239.65 KB); presentation: WASPAA2019_Presentation.pdf (1.69 MB). Origin: files produced by the author(s).

Dates and versions

hal-02264247 , version 1 (06-08-2019)
hal-02264247 , version 2 (14-10-2019)

Cite

Xiaofei Li, Radu Horaud. Multichannel Speech Enhancement Based on Time-frequency Masking Using Subband Long Short-Term Memory. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct 2019, New Paltz, NY, United States. pp.298-302, ⟨10.1109/WASPAA.2019.8937218⟩. ⟨hal-02264247v2⟩