Deep neural network based multichannel audio source separation - Inria
Book chapter, 2018

Deep neural network based multichannel audio source separation

Abstract

This chapter presents a multichannel audio source separation framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. Different design choices and their impact on the performance are discussed. They include the cost functions for DNN training, the number of parameter updates, the use of multiple DNNs, and the use of weighted parameter updates. Finally, we present its application to a speech enhancement task and a music separation task. The experimental results show the benefit of the multichannel DNN-based approach over a single-channel DNN-based approach and the multichannel nonnegative matrix factorization based iterative EM framework.
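The core operation described above can be sketched in NumPy: given source power spectral densities v_j(f,n) (e.g. predicted by a DNN) and spatial covariance matrices R_j(f) under the multichannel Gaussian model, the source images are recovered with a multichannel Wiener filter. This is a minimal illustrative sketch, not the chapter's implementation; the function name, array shapes, and the regularization constant are assumptions.

```python
import numpy as np

def multichannel_wiener_filter(x, v, R):
    """Multichannel Wiener filtering under the Gaussian model.

    x : (F, N, I) complex STFT of the I-channel mixture
    v : (J, F, N) nonnegative source PSDs (e.g. DNN outputs)
    R : (J, F, I, I) spatial covariance matrices of the J sources
    Returns (J, F, N, I) complex STFT estimates of the source images.
    """
    J, F, N = v.shape
    I = x.shape[-1]
    # Mixture covariance: sum_j v_j(f,n) R_j(f), shape (F, N, I, I)
    Rx = np.einsum('jfn,jfab->fnab', v, R)
    # Small diagonal loading for numerical stability before inversion
    Rx = Rx + 1e-10 * np.eye(I)
    Rx_inv = np.linalg.inv(Rx)
    # Wiener gain W_j(f,n) = v_j(f,n) R_j(f) Rx(f,n)^{-1}, applied to x
    s = np.empty((J, F, N, I), dtype=complex)
    for j in range(J):
        W = np.einsum('fn,fab,fnbc->fnac', v[j], R[j], Rx_inv)
        s[j] = np.einsum('fnab,fnb->fna', W, x)
    return s
```

A useful sanity check of this construction: because the Wiener gains sum to (approximately) the identity, the estimated source images add back up to the mixture.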
Main file: nugraha_book18.pdf (760.62 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01633858, version 1 (13-11-2017)

Identifiers

Cite

Aditya Arie Nugraha, Antoine Liutkus, Emmanuel Vincent. Deep neural network based multichannel audio source separation. Audio Source Separation, Springer, pp.157-195, 2018, 978-3-319-73030-1. ⟨10.1007/978-3-319-73031-8_7⟩. ⟨hal-01633858⟩
533 views
1606 downloads
