Exploiting the Complementarity of Audio and Visual Data in Multi-Speaker Tracking

Yutong Ban 1, Laurent Girin 1,2, Xavier Alameda-Pineda 1, Radu Horaud 1
1 PERCEPTION - Interpretation and Modelling of Images and Videos, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
2 GIPSA-lab, CRISSP - Département Parole et Cognition
Abstract: Multi-speaker tracking is a central problem in human-robot interaction. In this context, exploiting auditory and visual information is both gratifying and challenging. Gratifying, because the complementary nature of the two modalities makes the tracker more robust to noise and outliers than unimodal approaches. Challenging, because properly fusing auditory and visual information for multi-speaker tracking is far from a solved problem. In this paper we propose a probabilistic generative model that tracks multiple speakers by jointly exploiting auditory and visual features, each in its own representation space. Importantly, the method is robust to missing data and can therefore keep tracking even when observations from one of the modalities are absent. Quantitative and qualitative results on the AVDIAR dataset are reported.
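
The abstract gives only a high-level description of the fusion scheme. Below is a minimal, hypothetical Python sketch, not the authors' actual model, of one way to combine per-modality Gaussian likelihoods for candidate speaker positions while letting a missing modality simply drop out of the update. The functions fuse_modalities and audio_map, the Gaussian observation model, and the linear audio mapping are all illustrative assumptions.

import numpy as np

def gaussian_loglik(obs, mean, cov):
    # Log-likelihood of an observation under a Gaussian centred on the
    # predicted value in that modality's own feature space.
    diff = obs - mean
    _, logdet = np.linalg.slogdet(cov)
    k = obs.shape[0]
    return -0.5 * (k * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def audio_map(pos):
    # Hypothetical mapping from image position to the audio feature space
    # (here: keep only the horizontal coordinate as a stand-in for direction).
    return pos[:1]

def fuse_modalities(candidates, visual_obs, audio_obs, vis_cov, aud_cov):
    # Score candidate speaker positions by summing the log-likelihoods of
    # whichever modalities are observed; a missing modality (None) simply
    # drops out of the sum instead of invalidating the update.
    scores = []
    for pos in candidates:
        score = 0.0
        if visual_obs is not None:
            score += gaussian_loglik(visual_obs, pos, vis_cov)
        if audio_obs is not None:
            score += gaussian_loglik(audio_obs, audio_map(pos), aud_cov)
        scores.append(score)
    return np.array(scores)

# Toy usage: two candidate positions, a visual detection, no audio observation.
candidates = [np.array([0.2, 0.5]), np.array([0.8, 0.5])]
scores = fuse_modalities(candidates,
                         visual_obs=np.array([0.25, 0.48]),
                         audio_obs=None,
                         vis_cov=0.01 * np.eye(2),
                         aud_cov=0.05 * np.eye(1))
print("most likely candidate:", int(np.argmax(scores)))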



https://hal.inria.fr/hal-01577965
Contributor: Team Perception
Submitted on: Monday, August 28, 2017 - 3:07:40 PM
Last modification on: Thursday, March 14, 2019 - 1:19:49 AM

Files

ICCVW_submission.pdf
Files produced by the author(s)

Identifiers

HAL Id: hal-01577965
DOI: 10.1109/ICCVW.2017.60

Citation

Yutong Ban, Laurent Girin, Xavier Alameda-Pineda, Radu Horaud. Exploiting the Complementarity of Audio and Visual Data in Multi-Speaker Tracking. ICCV Workshop on Computer Vision for Audio-Visual Media, Oct 2017, Venice, Italy. pp.446-454, ⟨10.1109/ICCVW.2017.60⟩. ⟨hal-01577965⟩
