Conference papers

Adaptive filtering for music/voice separation exploiting the repeating musical structure

Abstract: The separation of the lead vocals from the background accompaniment in audio recordings is a challenging task. Recently, an efficient method called REPET (REpeating Pattern Extraction Technique) has been proposed to extract the repeating background from the non-repeating foreground. While effective on individual sections of a song, REPET does not allow for variations in the background (e.g. verse vs. chorus), and is thus limited to short excerpts only. We overcome this limitation and generalize REPET to permit the processing of complete musical tracks. The proposed algorithm tracks the period of the repeating structure and computes local estimates of the background pattern. Separation is performed by soft time-frequency masking, based on the deviation between the current observation and the estimated background pattern. Evaluation on a dataset of 14 complete tracks shows that this method can perform at least as well as a recent competitive music/voice separation method, while being computationally efficient.
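To illustrate the core idea, the following is a minimal REPET-style sketch (not the authors' exact adaptive algorithm): the repeating background of a magnitude spectrogram is estimated by taking the median over frames one period apart, and a soft time-frequency mask is derived from how much of each observation the repeating model explains. The function name, the fixed (non-tracked) period, and the median estimator are illustrative assumptions.

```python
import numpy as np

def repet_soft_mask(V, period):
    """Sketch of REPET-style soft masking (assumed simplification:
    a single fixed period instead of the paper's tracked, locally
    adaptive period).

    V      -- magnitude spectrogram, shape (n_freq, n_frames)
    period -- repeating period, in frames
    """
    n_freq, n_frames = V.shape
    n_seg = n_frames // period
    # Stack period-length segments and take the element-wise median,
    # which suppresses the non-repeating (vocal) foreground:
    segs = V[:, :n_seg * period].reshape(n_freq, n_seg, period)
    repeating = np.median(segs, axis=1)          # (n_freq, period)
    # Tile the pattern back over time and cap it by the observation:
    W = np.minimum(np.tile(repeating, (1, n_seg)), V[:, :n_seg * period])
    # Soft mask in [0, 1]: fraction of energy explained by the
    # repeating background model (small epsilon avoids division by 0):
    return W / np.maximum(V[:, :n_seg * period], 1e-12)

# Toy usage: a 4-bin, 8-frame spectrogram with a period of 4 frames.
V = np.abs(np.random.randn(4, 8)) + 1.0
M = repet_soft_mask(V, period=4)
assert M.shape == (4, 8) and np.all((M >= 0.0) & (M <= 1.0))
```

Applying the mask to the complex STFT and inverting it would yield the background estimate; `1 - M` gives the vocal foreground.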

Contributor: Antoine Liutkus
Submitted on: Thursday, March 13, 2014 - 4:03:29 PM
Last modification on: Tuesday, March 8, 2022 - 5:46:03 PM
Long-term archiving on: Friday, June 13, 2014 - 10:41:44 AM






Antoine Liutkus, Zafar Rafii, Roland Badeau, Bryan Pardo, Gael Richard. Adaptive filtering for music/voice separation exploiting the repeating musical structure. 37th International Conference on Acoustics, Speech, and Signal Processing ICASSP'12, 2012, Kyoto, Japan. pp.53--56, ⟨10.1109/ICASSP.2012.6287815⟩. ⟨hal-00945300⟩


