Conference papers

Adaptive filtering for music/voice separation exploiting the repeating musical structure

Abstract: The separation of the lead vocals from the background accompaniment in audio recordings is a challenging task. Recently, an efficient method called REPET (REpeating Pattern Extraction Technique) has been proposed to extract the repeating background from the non-repeating foreground. While effective on individual sections of a song, REPET does not allow for variations in the background (e.g. verse vs. chorus), and is thus limited to short excerpts only. We overcome this limitation and generalize REPET to permit the processing of complete musical tracks. The proposed algorithm tracks the period of the repeating structure and computes local estimates of the background pattern. Separation is performed by soft time-frequency masking, based on the deviation between the current observation and the estimated background pattern. Evaluation on a dataset of 14 complete tracks shows that this method can perform at least as well as a recent competitive music/voice separation method, while being computationally efficient.
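The core REPET idea described above can be sketched in a few lines: estimate the repeating background by a median across repetitions of the magnitude spectrogram, then build a soft time-frequency mask from how much of each bin the background explains. This is a hypothetical NumPy illustration of the technique, not the authors' implementation; the function name, the median-based background model, and the fixed-period assumption (the paper's contribution is precisely to track a *varying* period and compute *local* background estimates) are simplifications for clarity.

```python
import numpy as np

def repet_soft_mask(V, period):
    """Soft time-frequency mask from a repeating-background estimate.

    V      : magnitude spectrogram, shape (freq_bins, frames)
    period : repeating period in frames (assumed fixed here;
             the paper tracks it adaptively over the track)
    Returns a mask in [0, 1]; multiplying the mixture STFT by it
    yields the background, by (1 - mask) the foreground/vocals.
    """
    n_bins, n_frames = V.shape
    n_seg = n_frames // period
    # Stack the repetitions: (freq_bins, n_seg, period).
    segs = V[:, :n_seg * period].reshape(n_bins, n_seg, period)
    # Background model: element-wise median across repetitions,
    # which suppresses the non-repeating foreground.
    W = np.median(segs, axis=1)
    # Tile the model back to the full track length.
    W_full = np.tile(W, (1, n_seg))
    if n_frames > n_seg * period:
        W_full = np.concatenate(
            [W_full, W[:, :n_frames - n_seg * period]], axis=1)
    # Repeating energy cannot exceed the observed mixture.
    B = np.minimum(W_full, V)
    # Soft mask: fraction of each bin explained by the background.
    return B / np.maximum(V, 1e-12)
```

A frame where the mixture deviates strongly from the background model (e.g. a sung note on top of a repeating accompaniment) gets a mask value well below 1 in the affected bins, so the deviation is routed to the foreground, matching the soft-masking step described in the abstract.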


https://hal.inria.fr/hal-00945300
Contributor: Antoine Liutkus
Submitted on: Thursday, March 13, 2014 - 4:03:29 PM
Last modification on: Wednesday, October 14, 2020 - 1:11:49 PM
Long-term archiving on: Friday, June 13, 2014 - 10:41:44 AM

File

adaptive_repet.pdf (produced by the author(s))

Citation

Antoine Liutkus, Zafar Rafii, Roland Badeau, Bryan Pardo, Gael Richard. Adaptive filtering for music/voice separation exploiting the repeating musical structure. 37th International Conference on Acoustics, Speech, and Signal Processing (ICASSP'12), 2012, Kyoto, Japan. pp. 53-56, ⟨10.1109/ICASSP.2012.6287815⟩. ⟨hal-00945300⟩

Metrics

Record views: 310
File downloads: 1555