Learning Multi-Modal Dictionaries: Application to Audiovisual Data

Gianluca Monaci 1, Philippe Jost 1, Pierre Vandergheynst 1, Boris Mailhé 2, Sylvain Lesage 2, Rémi Gribonval 2
1 Signal Processing Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL)
2 METISS - Speech and sound data modeling and processing, IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, Inria Rennes – Bretagne Atlantique
Abstract: This paper presents a methodology for extracting meaningful synchronous structures from multi-modal signals. Simultaneous processing of multi-modal data can reveal information that is unavailable when the sources are handled separately. However, in natural high-dimensional data, the statistical dependencies between modalities are most often not obvious. Learning fundamental multi-modal patterns offers an alternative to classical statistical methods. Typically, recurrent patterns are shift-invariant, so the learning procedure should seek the best matching filters. We present a new algorithm for iteratively learning multi-modal generating functions that can be shifted to all positions in the signal. The proposed algorithm is applied to audiovisual sequences and is shown to discover underlying structures in the data.
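
To make the abstract's idea concrete, below is a minimal sketch, in Python/NumPy, of one plausible shift-invariant learning loop: each modality's generating function is matched against every training signal at its best-correlating translation, and the aligned patches are averaged to update the atom pair. This is an illustrative assumption, not the paper's algorithm; the names (learn_multimodal_atom, atom_len, n_iter) are hypothetical, and the paper's actual update rule and handling of audiovisual sequences differ.

    import numpy as np

    def learn_multimodal_atom(audio_set, video_set, atom_len, n_iter=20, seed=0):
        """Iteratively refine one audio-video pair of generating functions.

        Sketch only: for every training pair of 1-D signals, the atom is
        translated to the position where the summed correlation over both
        modalities peaks; the aligned patches are then averaged and
        renormalized to update the atom pair.
        """
        rng = np.random.default_rng(seed)
        atom_a = rng.standard_normal(atom_len)
        atom_v = rng.standard_normal(atom_len)
        for _ in range(n_iter):
            patches_a, patches_v = [], []
            for sig_a, sig_v in zip(audio_set, video_set):
                # Joint alignment: a single shift maximizing the combined
                # response, which is what couples the two modalities.
                corr = (np.abs(np.correlate(sig_a, atom_a, mode="valid")) +
                        np.abs(np.correlate(sig_v, atom_v, mode="valid")))
                k = int(np.argmax(corr))
                patches_a.append(sig_a[k:k + atom_len])
                patches_v.append(sig_v[k:k + atom_len])
            # Update step (a simple stand-in for the paper's learning rule):
            # average the aligned patches, then renormalize.
            atom_a = np.mean(patches_a, axis=0)
            atom_v = np.mean(patches_v, axis=0)
            atom_a /= np.linalg.norm(atom_a) or 1.0
            atom_v /= np.linalg.norm(atom_v) or 1.0
        return atom_a, atom_v

Under this sketch, a genuinely synchronous audiovisual event, such as moving lips co-occurring with a spoken sound, would surface as a pair of atoms that consistently align at the same temporal shift across the training signals.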

https://hal.inria.fr/inria-00544773

Citation

Gianluca Monaci, Philippe Jost, Pierre Vandergheynst, Boris Mailhé, Sylvain Lesage, Rémi Gribonval. Learning Multi-Modal Dictionaries: Application to Audiovisual Data. Proc. of the International Workshop on Multimedia Content Representation, Classification and Security (MCRCS'06), Sep 2006, Istanbul, Turkey. pp. 538-545. ⟨10.1007/11848035_71⟩. ⟨inria-00544773⟩
