
Explaining deep learning models for speech enhancement

Sunit Sivasankaran (1), Emmanuel Vincent (2), Dominique Fohr (2)
(2) MULTISPEECH - Speech Modeling for Facilitating Oral-Based Communication, Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
Abstract: We consider the problem of explaining the robustness of neural networks used to compute time-frequency masks for speech enhancement to mismatched noise conditions. We employ the Deep SHapley Additive exPlanations (DeepSHAP) feature attribution method to quantify the contribution of every time-frequency bin in the input noisy speech signal to every time-frequency bin in the output time-frequency mask. We define an objective metric, referred to as the speech relevance score, that summarizes the obtained SHAP values, and show that it correlates with the enhancement performance, as measured by the word error rate on the CHiME-4 real evaluation dataset. We use the speech relevance score to explain the generalization ability of three speech enhancement models trained using synthetically generated speech-shaped noise, noise from a professional sound effects library, or real CHiME-4 noise. To the best of our knowledge, this is the first study on neural network explainability in the context of speech enhancement.
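The attribution step described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration under stated assumptions, not the authors' released code: it uses a toy PyTorch mask estimator (ToyMaskNet, a placeholder), the shap library's DeepExplainer to attribute one output time-frequency bin of the mask to every input time-frequency bin of the noisy spectrogram, and an illustrative relevance ratio in the spirit of the speech relevance score; the paper's exact score definition may differ, and the oracle speech-dominance mask here is a random stand-in.

import numpy as np
import torch
import shap

F_BINS, T_FRAMES = 64, 50          # toy spectrogram size; real STFT grids are larger

class ToyMaskNet(torch.nn.Module):
    """Placeholder mask estimator: noisy log-magnitude spectrogram -> flattened T-F mask."""
    def __init__(self, f_bins, t_frames):
        super().__init__()
        self.lin = torch.nn.Linear(f_bins * t_frames, f_bins * t_frames)
        self.act = torch.nn.Sigmoid()
    def forward(self, x):
        return self.act(self.lin(x.flatten(1)))   # (batch, F*T) mask values in [0, 1]

class BinOutput(torch.nn.Module):
    """Expose a single output T-F bin so DeepExplainer attributes it to every input bin."""
    def __init__(self, net, f, t, t_frames):
        super().__init__()
        self.net, self.idx = net, f * t_frames + t
    def forward(self, x):
        return self.net(x)[:, self.idx].unsqueeze(1)

mask_net = ToyMaskNet(F_BINS, T_FRAMES)
background = torch.randn(8, F_BINS, T_FRAMES)    # reference noisy spectrograms for DeepSHAP
noisy_spec = torch.randn(1, F_BINS, T_FRAMES)    # utterance to explain

explainer = shap.DeepExplainer(BinOutput(mask_net, f=10, t=20, t_frames=T_FRAMES), background)
shap_vals = explainer.shap_values(noisy_spec, check_additivity=False)
attrib = np.abs(np.asarray(shap_vals)).reshape(F_BINS, T_FRAMES)   # one value per input T-F bin

# Illustrative summary only: share of attribution mass on speech-dominated input bins.
# On simulated mixtures this oracle mask would come from the clean and noise spectrograms;
# here it is a random stand-in.
speech_dominant = np.random.rand(F_BINS, T_FRAMES) > 0.5
speech_relevance = attrib[speech_dominant].sum() / attrib.sum()
print(f"toy speech relevance score: {speech_relevance:.3f}")

Repeating the attribution over all output bins (rather than the single bin shown here) yields the full input-to-output attribution map that the paper summarizes into its score.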

https://hal.inria.fr/hal-03257450
Contributor: Sunit Sivasankaran
Submitted on: Friday, June 11, 2021 - 12:38:36 AM
Last modification on: Thursday, May 5, 2022 - 10:28:18 AM
Long-term archiving on: Sunday, September 12, 2021 - 6:03:28 PM

File

dnn_explain_revised.pdf
Files produced by the author(s)

Identifiers

HAL Id: hal-03257450
DOI: 10.21437/Interspeech.2021-1764

Citation

Sunit Sivasankaran, Emmanuel Vincent, Dominique Fohr. Explaining deep learning models for speech enhancement. INTERSPEECH 2021, Aug 2021, Brno, Czech Republic. ⟨10.21437/Interspeech.2021-1764⟩. ⟨hal-03257450⟩


Metrics

Record views: 252
File downloads: 786