Multichannel Music Separation with Deep Neural Networks

Aditya Arie Nugraha, Antoine Liutkus, Emmanuel Vincent
This article addresses the problem of multichannel music separation. We propose a framework where the source spectra are estimated using deep neural networks and combined with spatial covariance matrices to encode the source spatial characteristics. The parameters are estimated in an iterative expectation-maximization fashion and used to derive a multichannel Wiener filter. We evaluate the proposed framework for the task of music separation on a large dataset. Experimental results show that the method we describe performs consistently well in separating singing voice and other instruments from realistic musical mixtures.
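The pipeline from the abstract — source spectra combined with spatial covariance matrices, iterative EM updates, and a final multichannel Wiener filter — can be sketched in NumPy under the standard local Gaussian model. This is an illustrative sketch, not the authors' code: the dimensions are arbitrary, random arrays stand in for the DNN-estimated source spectra and for the mixture STFT, and the small regularization epsilons are an implementation convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
F, N, I, J = 8, 10, 2, 2  # freq bins, time frames, channels, sources (toy sizes)

# Stand-ins for DNN-estimated source power spectra v_j(f, n) (here: random)
v = rng.random((J, F, N)) + 0.1
# Spatial covariance matrices R_j(f): one I x I matrix per source and frequency
R = np.tile(np.eye(I, dtype=complex), (J, F, 1, 1))
# Simulated multichannel mixture STFT X(f, n) in C^I (observed in practice)
X = rng.normal(size=(F, N, I)) + 1j * rng.normal(size=(F, N, I))

eps = 1e-9
for _ in range(3):  # EM-style iterations
    # E-step: per-source covariances Sigma_j = v_j * R_j and mixture covariance
    Sigma = v[..., None, None] * R[:, :, None]          # (J, F, N, I, I)
    Sigma_x = Sigma.sum(axis=0)                          # (F, N, I, I)
    Sigma_x_inv = np.linalg.inv(Sigma_x + eps * np.eye(I))
    # Multichannel Wiener filters W_j = Sigma_j Sigma_x^{-1}
    W = Sigma @ Sigma_x_inv                              # (J, F, N, I, I)
    S = np.einsum('jfnab,fnb->jfna', W, X)               # source image estimates
    # Posterior second-order moments of the source images
    Rhat = S[..., :, None] * S[..., None, :].conj() + (np.eye(I) - W) @ Sigma
    # M-step: re-estimate R_j and v_j from the posterior statistics
    R_new = []
    for j in range(J):
        Rj = (Rhat[j] / np.maximum(v[j][..., None, None], eps)).mean(axis=1)
        R_new.append(Rj)
        v[j] = np.real(np.trace(
            np.einsum('fab,fnbc->fnac', np.linalg.inv(Rj + eps * np.eye(I)),
                      Rhat[j]),
            axis1=-2, axis2=-1)) / I
    R = np.stack(R_new)

# S holds one multichannel image per source; the filters sum to (near) identity,
# so the source images add back up to the mixture.
print(S.shape)  # (J, F, N, I)
```

In the paper, the spectra update is done by the DNN rather than by the closed-form M-step used here; the Wiener-filter E-step is the same either way, which is why the source images reconstruct the mixture by construction.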
Main file: eusipco_w_ack.pdf (385.77 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01334614, version 1 (21-06-2016)
hal-01334614, version 2 (14-06-2017)

  • HAL Id: hal-01334614, version 2


Aditya Arie Nugraha, Antoine Liutkus, Emmanuel Vincent. Multichannel Music Separation with Deep Neural Networks. European Signal Processing Conference (EUSIPCO), Aug 2016, Budapest, Hungary. pp. 1748-1752. ⟨hal-01334614v2⟩

