Listening to features

Manuel Moussallam (1), Antoine Liutkus (2, 3), Laurent Daudet (4)
2 MULTISPEECH - Speech Modeling for Facilitating Oral-Based Communication
Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
3 PAROLE - Analysis, perception and recognition of speech
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : This work explores nonparametric methods that aim at synthesizing audio from the low-dimensional acoustic features typically used in MIR frameworks. Several issues prevent this task from being achieved straightforwardly. Such features are designed for analysis rather than synthesis, favoring high-level description over easily inverted acoustic representations. Whereas some previous studies have considered the problem of synthesizing audio from features such as Mel-Frequency Cepstral Coefficients, they mainly relied on the explicit formula used to compute those features in order to invert them. Here, we instead adopt a simple blind approach, where arbitrary sets of features can be used during synthesis and where reconstruction is exemplar-based. After testing the approach on the problem of synthesizing speech from well-known features, we apply it to the more complex task of inverting songs from the Million Song Dataset. Two points make this task harder. First, the features are irregularly spaced in the temporal domain, following an onset-based segmentation. Second, the exact method used to compute these features is unknown, although the features for new audio can be computed through their API as a black box. In this paper, we detail these difficulties and present a framework for nonetheless attempting such synthesis by concatenating audio samples from a training dataset, whose features have been computed beforehand. Samples are selected at the segment level, in the feature space, with a simple nearest neighbor search. Additional constraints can then be defined to enhance the pertinence of the synthesis. Preliminary experiments are presented using the RWC and GTZAN audio datasets to synthesize tracks from the Million Song Dataset.
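The selection step described in the abstract, a nearest neighbor search in the feature space at the segment level followed by concatenation, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function and variable names are invented here, and a plain Euclidean distance stands in for whatever metric and additional constraints the paper actually uses.

```python
import numpy as np

def synthesize_by_nearest_neighbor(target_feats, train_feats, train_segments):
    """Exemplar-based synthesis sketch: for each target feature vector,
    pick the training segment whose feature vector is closest (L2 norm),
    then concatenate the selected audio segments in order.

    target_feats   : (n_target, d) array of feature vectors to invert
    train_feats    : (n_train, d) array of features of known segments
    train_segments : list of n_train 1-D audio arrays (variable lengths)
    """
    selected = []
    for f in target_feats:
        # Distance from this target frame to every training exemplar
        dists = np.linalg.norm(train_feats - f, axis=1)
        selected.append(train_segments[int(np.argmin(dists))])
    # Naive concatenation; the paper's onset-based segmentation would
    # additionally require aligning segment boundaries in time.
    return np.concatenate(selected)

# Toy usage with two training exemplars in a 2-D feature space
train_feats = np.array([[0.0, 0.0], [1.0, 1.0]])
train_segments = [np.array([0.0, 0.0]), np.array([1.0, 1.0, 1.0])]
target_feats = np.array([[0.9, 1.1], [0.1, -0.1]])
audio = synthesize_by_nearest_neighbor(target_feats, train_feats, train_segments)
```

Because segments come from real recordings and can have different lengths, the output length depends on which exemplars are selected, which is one reason the temporal irregularity of the Million Song Dataset features complicates the task.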
Keywords : synthesis, features, audio

Cited literature [25 references]

https://hal.inria.fr/hal-01118307
Contributor : Antoine Liutkus
Submitted on : Wednesday, February 18, 2015 - 5:14:59 PM
Last modification on : Saturday, November 16, 2019 - 6:56:02 PM
Long-term archiving on: Tuesday, May 19, 2015 - 10:51:18 AM

File

LDM_listening_to_features.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01118307, version 1

Citation

Manuel Moussallam, Antoine Liutkus, Laurent Daudet. Listening to features. [Research Report] Institut Langevin, ESPCI - CNRS - Paris Diderot University - UPMC. 2015, pp.24. ⟨hal-01118307⟩

Metrics

  • Record views : 415
  • File downloads : 143