Conference papers

A Reproducible Research Framework for Audio Inpainting

Amir Adler 1, Valentin Emiya 2, Maria Jafari 3, Michael Elad 1, Rémi Gribonval 2, Mark D. Plumbley 3
2 METISS – Speech and sound data modeling and processing, IRISA – Institut de Recherche en Informatique et Systèmes Aléatoires, Inria Rennes – Bretagne Atlantique
Abstract: We introduce a unified framework for the restoration of distorted audio data, leveraging the Image Inpainting concept and covering existing audio applications. In this framework, termed Audio Inpainting, the distorted data are considered missing and their location is assumed to be known. We further introduce baseline approaches based on sparse representations. For this new audio inpainting concept, we provide reproducible-research tools including: the handling of audio inpainting tasks as inverse problems, embedded in a frame-based scheme similar to patch-based image processing; several experimental settings; speech and music material; and OMP-like algorithms, with two dictionaries, for general audio inpainting or specifically enhanced declipping.
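The frame-based scheme described in the abstract can be sketched as follows: each frame is modeled as sparse in a dictionary, an OMP-style greedy pursuit is fitted using only the reliable samples, and the missing samples are then synthesized from the estimated sparse coefficients. This is an illustrative sketch, not the authors' released toolbox; the function names (`dct_dictionary`, `omp_inpaint_frame`), the choice of a DCT dictionary, and all parameter values are assumptions for the example.

```python
import numpy as np

def dct_dictionary(n):
    """DCT-II style square dictionary (n x n); columns are cosine atoms,
    normalized to unit norm over the full frame. (Illustrative choice.)"""
    k = np.arange(n)
    D = np.cos(np.pi / n * (k[:, None] + 0.5) * k[None, :])
    return D / np.linalg.norm(D, axis=0)

def omp_inpaint_frame(y, mask, D, n_nonzero=8, tol=1e-9):
    """Inpaint one frame: fit a sparse model to the reliable samples
    (mask == True) with an OMP-like pursuit, then synthesize the rest.

    y    : frame values (missing positions may hold arbitrary values)
    mask : boolean array, True where samples are reliable
    D    : dictionary (n_samples x n_atoms)
    """
    Dm = D[mask]                                    # observed rows only
    col_norms = np.maximum(np.linalg.norm(Dm, axis=0), 1e-12)
    residual = y[mask].astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # select the atom best correlated with the residual on the
        # observed samples (normalized, since masking breaks unit norms)
        k = int(np.argmax(np.abs(Dm.T @ residual) / col_norms))
        if k not in support:
            support.append(k)
        # least-squares refit of all selected atoms on the observed samples
        coef, *_ = np.linalg.lstsq(Dm[:, support], y[mask], rcond=None)
        residual = y[mask] - Dm[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(D.shape[1])
    x[support] = coef
    estimate = D @ x                                # full-frame synthesis
    out = y.astype(float).copy()
    out[~mask] = estimate[~mask]                    # fill only missing samples
    return out
```

In a full system, the signal would be cut into overlapping frames, each frame inpainted this way, and the results overlap-added; declipping additionally constrains the estimate to exceed the clipping level at the clipped positions, which this sketch does not implement.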
Contributor: Valentin Emiya
Submitted on: Monday, June 20, 2011 - 6:15:19 PM
Last modification on: Thursday, January 20, 2022 - 4:18:44 PM
Long-term archiving on: Wednesday, September 21, 2011 - 2:20:31 AM




  • HAL Id: inria-00587688, version 1


Amir Adler, Valentin Emiya, Maria Jafari, Michael Elad, Rémi Gribonval, et al. A Reproducible Research Framework for Audio Inpainting. Workshop on Signal Processing with Adaptive Sparse Structured Representations, Jun 2011, Edinburgh, United Kingdom. ⟨inria-00587688⟩


