Conference paper, 2006

Learning Multi-Modal Dictionaries: Application to Audiovisual Data

Abstract

This paper presents a methodology for extracting meaningful synchronous structures from multi-modal signals. Processing multi-modal data jointly can reveal information that is unavailable when the sources are handled separately. In natural high-dimensional data, however, the statistical dependencies between modalities are rarely obvious. Learning fundamental multi-modal patterns offers an alternative to classical statistical methods. Recurrent patterns are typically shift-invariant, so the learning procedure should search for the best-matching filters. We present a new algorithm that iteratively learns multi-modal generating functions that can be shifted to any position in the signal. The proposed algorithm is applied to audiovisual sequences and is shown to discover underlying structures in the data.
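To make the idea concrete, the sketch below illustrates the general principle described in the abstract: a pair of audio/video generating functions is localized jointly (the same shift in both modalities, enforcing synchrony) and then updated from the aligned training patches. This is a minimal, hypothetical Python illustration, not the authors' exact algorithm; the function name, the additive correlation score, and the normalized-averaging update are all assumptions made for this example.

```python
import numpy as np

def learn_multimodal_atom(audio_set, video_set, length, n_iter=20, seed=0):
    """Toy sketch: learn one pair of synchronous generating functions.

    audio_set, video_set: lists of paired 1-D training signals in which
    the same underlying pattern occurs at the same (unknown) position.
    """
    rng = np.random.default_rng(seed)
    g_a = rng.standard_normal(length)
    g_a /= np.linalg.norm(g_a)
    g_v = rng.standard_normal(length)
    g_v /= np.linalg.norm(g_v)

    for _ in range(n_iter):
        acc_a = np.zeros(length)
        acc_v = np.zeros(length)
        for a, v in zip(audio_set, video_set):
            # Localize the pattern jointly: one shift scored across BOTH
            # modalities, which is what enforces audio-video synchrony.
            corr = (np.correlate(a, g_a, mode="valid")
                    + np.correlate(v, g_v, mode="valid"))
            t = int(np.argmax(np.abs(corr)))
            s = np.sign(corr[t]) or 1.0  # align polarity of the patch
            acc_a += s * a[t:t + length]
            acc_v += s * v[t:t + length]
        # Update each generating function as the normalized average of
        # the aligned patches (an assumed, power-method-like step).
        g_a = acc_a / (np.linalg.norm(acc_a) + 1e-12)
        g_v = acc_v / (np.linalg.norm(acc_v) + 1e-12)
    return g_a, g_v

# Toy usage: a shared pattern occurs at the same random position in each
# paired audio/video signal, buried in noise.
rng = np.random.default_rng(1)
pattern_a, pattern_v = np.hanning(16), -np.hanning(16)
audio, video = [], []
for _ in range(50):
    t = rng.integers(0, 240)
    a = 0.1 * rng.standard_normal(256); a[t:t + 16] += pattern_a
    v = 0.1 * rng.standard_normal(256); v[t:t + 16] += pattern_v
    audio.append(a); video.append(v)
g_a, g_v = learn_multimodal_atom(audio, video, length=16)
```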

Dates and versions

inria-00544773, version 1 (08-02-2011)

Cite

Gianluca Monaci, Philippe Jost, Pierre Vandergheynst, Boris Mailhé, Sylvain Lesage, et al. Learning Multi-Modal Dictionaries: Application to Audiovisual Data. Proc. of the International Workshop on Multimedia Content Representation, Classification and Security (MCRCS'06), Sep 2006, Istanbul, Turkey, pp. 538-545. ⟨10.1007/11848035_71⟩ ⟨inria-00544773⟩