Conference Paper, Year: 2014

On-the-fly audio source separation

Abstract

This paper addresses the challenging task of single channel audio source separation. We introduce a novel concept of on-the-fly audio source separation which greatly simplifies the user's interaction with the system compared to the state-of-the-art user-guided approaches. In the proposed framework, the user is only asked to listen to an audio mixture and type some keywords (e.g. "dog barking", "wind", etc.) describing the sound sources to be separated. These keywords are then used as text queries to search for audio examples from the internet to guide the separation process. In particular, we propose several approaches to efficiently exploit these retrieved examples, including an approach based on a generic spectral model with group sparsity-inducing constraints. Finally, we demonstrate the effectiveness of the proposed framework with mixtures containing various types of sounds.
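The sketch below is a minimal, illustrative take on the idea described in the abstract, not the authors' implementation: plain KL-divergence NMF learns a small spectral dictionary from each retrieved example, the dictionaries are concatenated and kept fixed, and the mixture activations are updated with a simple group-norm reweighting as a stand-in for the group sparsity-inducing constraints mentioned above. The function names, the dictionary size K, and the penalty weight lam are assumptions made for illustration.

```python
import numpy as np

def nmf_kl(V, K, n_iter=200, seed=0):
    """Plain multiplicative-update NMF (KL divergence): V ~ W @ H."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + 1e-3
    H = rng.random((K, N)) + 1e-3
    ones = np.ones((F, N))
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / (ones @ H.T + 1e-12)
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / (W.T @ ones + 1e-12)
    return W, H

def separate(V_mix, example_spectrograms, K=20, n_iter=200, lam=0.1):
    """Decompose a mixture spectrogram using dictionaries trained on the
    retrieved examples; one dictionary (= one group) per source keyword."""
    dicts = [nmf_kl(V, K)[0] for V in example_spectrograms]
    W = np.concatenate(dicts, axis=1)        # fixed, concatenated dictionary
    F, N = V_mix.shape
    ones = np.ones((F, N))
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], N)) + 1e-3
    # Activation rows belonging to the same example form one group.
    groups, start = [], 0
    for D in dicts:
        groups.append(slice(start, start + D.shape[1]))
        start += D.shape[1]
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        # Group-norm reweighting: weakly active groups receive a larger
        # penalty and are driven toward zero (illustrative group sparsity).
        penalty = np.zeros_like(H)
        for g in groups:
            penalty[g] = lam / (np.linalg.norm(H[g]) + 1e-12)
        H *= (W.T @ (V_mix / WH)) / (W.T @ ones + penalty + 1e-12)
    # Wiener-like masking: redistribute the mixture magnitude per source.
    WH = W @ H + 1e-12
    return [V_mix * (D @ H[g]) / WH for D, g in zip(dicts, groups)]
```

In practice, V_mix and the example spectrograms would be magnitude STFTs of the mixture and of the sounds retrieved for each keyword, and each separated magnitude would be combined with the mixture phase for resynthesis.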
Main file: ElBadawy_et_al_2014.pdf (1.17 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01023221, version 1 (11-07-2014)

Identifiers

  • HAL Id: hal-01023221, version 1

Cite

Dalia El Badawy, Ngoc Q. K. Duong, Alexey Ozerov. On-the-fly audio source separation. 24th IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2014), Sep 2014, Reims, France. ⟨hal-01023221⟩