Listening to features

Manuel Moussallam (1), Antoine Liutkus (2, 3), Laurent Daudet (4)
2: MULTISPEECH - Speech Modeling for Facilitating Oral-Based Communication, Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
3: PAROLE - Analysis, perception and recognition of speech, INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract: This work explores nonparametric methods that aim at synthesizing audio from the low-dimensional acoustic features typically used in MIR frameworks. Several issues prevent this task from being achieved straightforwardly. Such features are designed for analysis, not for synthesis, and thus favor high-level description over easily inverted acoustic representations. Whereas some previous studies have considered the problem of synthesizing audio from features such as Mel-Frequency Cepstral Coefficients, they mainly relied on the explicit formula used to compute those features in order to invert them. Here, we instead adopt a simple blind approach, in which arbitrary sets of features can be used during synthesis and reconstruction is exemplar-based. After testing the approach on a speech synthesis problem with well-known features, we apply it to the more complex task of inverting songs from the Million Song Dataset. What makes this task harder is twofold. First, the features are irregularly spaced in the temporal domain, following an onset-based segmentation. Second, the exact method used to compute these features is unknown, although features for new audio can be computed through their API as a black box. In this paper, we detail these difficulties and present a framework that nonetheless attempts such a synthesis by concatenating audio samples from a training dataset whose features have been computed beforehand. Samples are selected at the segment level, in the feature space, with a simple nearest-neighbor search. Additional constraints can then be defined to enhance the pertinence of the synthesis. Preliminary experiments are presented using the RWC and GTZAN audio datasets to synthesize tracks from the Million Song Dataset.
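The segment-level nearest-neighbor selection described in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: function and variable names are hypothetical, distances are plain Euclidean, and the additional constraints mentioned in the abstract are omitted.

```python
import numpy as np

def synthesize_by_nearest_neighbor(target_feats, train_feats, train_audio):
    """Exemplar-based concatenative synthesis sketch.

    target_feats: (n_segments, n_features) features of the track to invert.
    train_feats:  (n_exemplars, n_features) features of the training segments.
    train_audio:  list of n_exemplars audio arrays (segments may differ in length).
    Returns the concatenation of the audio of each nearest training exemplar.
    """
    output = []
    for f in target_feats:
        # Euclidean distance from this target segment to every training exemplar
        dists = np.linalg.norm(train_feats - f, axis=1)
        # Pick the closest exemplar and append its audio
        output.append(train_audio[int(np.argmin(dists))])
    return np.concatenate(output)
```

In practice one would replace the linear scan with a k-d tree or similar index when the training set is large, and trade off feature distance against continuity constraints between consecutive selected segments.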
Keywords: synthesis, features, audio
Document type:
Report
[Research Report] Institut Langevin, ESPCI - CNRS - Paris Diderot University - UPMC. 2015, pp.24

Cited literature: [25 references]

https://hal.inria.fr/hal-01118307
Contributor: Antoine Liutkus <>
Submitted on: Wednesday, February 18, 2015 - 17:14:59
Last modified on: Friday, August 31, 2018 - 09:12:33
Document(s) archived on: Tuesday, May 19, 2015 - 10:51:18

File

LDM_listening_to_features.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01118307, version 1

Citation

Manuel Moussallam, Antoine Liutkus, Laurent Daudet. Listening to features. [Research Report] Institut Langevin, ESPCI - CNRS - Paris Diderot University - UPMC. 2015, pp.24. 〈hal-01118307〉

Metrics

Record views: 351
File downloads: 106