Conference papers

On-the-fly audio source separation

Abstract: This paper addresses the challenging task of single channel audio source separation. We introduce a novel concept of on-the-fly audio source separation which greatly simplifies the user's interaction with the system compared to the state-of-the-art user-guided approaches. In the proposed framework, the user is only asked to listen to an audio mixture and type some keywords (e.g. "dog barking", "wind", etc.) describing the sound sources to be separated. These keywords are then used as text queries to search for audio examples from the internet to guide the separation process. In particular, we propose several approaches to efficiently exploit these retrieved examples, including an approach based on a generic spectral model with group sparsity-inducing constraints. Finally, we demonstrate the effectiveness of the proposed framework with mixtures containing various types of sounds.
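The abstract describes guiding single-channel separation with spectral models learned from retrieved examples. The sketch below is a minimal, hypothetical illustration of that general idea (it is not the paper's actual algorithm): per-source spectral dictionaries are learned by NMF from example magnitude spectrograms, then activations are fitted on the mixture with a simple L1 sparsity penalty standing in for the paper's group sparsity-inducing constraints, and soft masks reconstruct each source. All function names and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_dictionary(S, n_components, n_iter=100):
    """Learn a spectral dictionary W from an example magnitude
    spectrogram S via multiplicative-update NMF (Euclidean cost).
    Hypothetical helper; the paper's training procedure may differ."""
    F, N = S.shape
    W = rng.random((F, n_components)) + 1e-3
    H = rng.random((n_components, N)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ S) / (W.T @ W @ H + 1e-12)
        W *= (S @ H.T) / (W @ H @ H.T + 1e-12)
    return W

def separate(V, dicts, n_iter=200, sparsity=0.1):
    """Fix the per-source dictionaries, fit activations H on the mixture
    spectrogram V with an L1 penalty (a simple stand-in for group
    sparsity), then apply Wiener-like soft masks to split V."""
    W = np.hstack(dicts)          # concatenated source dictionaries
    K, N = W.shape[1], V.shape[1]
    H = rng.random((K, N)) + 1e-3
    for _ in range(n_iter):
        # sparsity term in the denominator shrinks small activations
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-12)
    total = W @ H + 1e-12
    outs, k0 = [], 0
    for Wj in dicts:
        kj = Wj.shape[1]
        Vj = Wj @ H[k0:k0 + kj]
        outs.append(V * (Vj / total))   # soft mask applied to the mixture
        k0 += kj
    return outs
```

In a full pipeline, `V` would be the magnitude STFT of the mixture and each estimated magnitude would be recombined with the mixture phase before an inverse STFT; those steps are omitted here for brevity.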

Cited literature: 19 references
Contributor: Alexey Ozerov
Submitted on: Friday, July 11, 2014 - 4:10:54 PM
Last modification on: Monday, July 14, 2014 - 8:52:53 AM
Long-term archiving on: Saturday, October 11, 2014 - 1:05:10 PM


Files produced by the author(s)


  • HAL Id: hal-01023221, version 1


Dalia El Badawy, Ngoc Q. K. Duong, Alexey Ozerov. On-the-fly audio source separation. 24th IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2014), Sep 2014, Reims, France. ⟨hal-01023221⟩


