Journal articles

Variational Bayesian Inference for Audio-Visual Tracking of Multiple Speakers

Yutong Ban (1), Xavier Alameda-Pineda (1), Laurent Girin (2, 1), Radu Horaud (1)
1 PERCEPTION - Interpretation and Modelling of Images and Videos
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
2 GIPSA-CRISSP - CRISSP
GIPSA-DPC - Département Parole et Cognition, GIPSA-PPC - GIPSA Pôle Parole et Cognition
Abstract: In this paper we address the problem of tracking multiple speakers via the fusion of visual and auditory information. We propose to exploit the complementary nature and roles of these two modalities in order to accurately estimate smooth trajectories of the tracked persons, to deal with the partial or total absence of one of the modalities over short periods of time, and to estimate the acoustic status - either speaking or silent - of each tracked person over time. We propose to cast the problem at hand into a generative audio-visual fusion (or association) model formulated as a latent-variable temporal graphical model. This may well be viewed as the problem of maximizing the posterior joint distribution of a set of continuous and discrete latent variables given the past and current observations, which is intractable. We propose a variational inference model which amounts to approximating the joint distribution with a factorized distribution. The solution takes the form of a closed-form expectation-maximization procedure. We describe the inference algorithm in detail, evaluate its performance, and compare it with several baseline methods. These experiments show that the proposed audio-visual tracker performs well in informal meetings involving a time-varying number of people.
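For readers unfamiliar with the terminology, the factorized approximation mentioned in the abstract is the standard mean-field scheme of variational Bayes. The sketch below uses generic placeholder symbols rather than the paper's own notation (which is given in the full text): s_t stands for the continuous latent variables at time t (e.g. speaker positions) and z_t for the discrete ones (e.g. speaking status and observation-to-person assignments). The intractable posterior is replaced by a product of factors,

\[
p(\mathbf{s}_t, \mathbf{z}_t \mid \mathbf{o}_{1:t}) \;\approx\; q(\mathbf{s}_t)\, q(\mathbf{z}_t),
\]

and the optimal factors are given, up to an additive constant, by the usual mean-field updates

\[
\log q^{\star}(\mathbf{s}_t) = \mathbb{E}_{q(\mathbf{z}_t)}\!\big[ \log p(\mathbf{o}_{1:t}, \mathbf{s}_t, \mathbf{z}_t) \big] + \mathrm{const},
\qquad
\log q^{\star}(\mathbf{z}_t) = \mathbb{E}_{q(\mathbf{s}_t)}\!\big[ \log p(\mathbf{o}_{1:t}, \mathbf{s}_t, \mathbf{z}_t) \big] + \mathrm{const}.
\]

Alternating these two updates (the variational E-step) with closed-form updates of the model parameters (the M-step) is what yields the expectation-maximization-like procedure referred to in the abstract; the exact form of the updates depends on the audio-visual observation and dynamical models chosen in the paper.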


https://hal.inria.fr/hal-01950866
Contributor: Team Perception
Submitted on: Monday, November 4, 2019 - 1:37:00 PM
Last modification on: Thursday, March 26, 2020 - 8:49:36 PM

Files

BAN_PAMI_V3 (1).pdf
Files produced by the author(s)


Citation

Yutong Ban, Xavier Alameda-Pineda, Laurent Girin, Radu Horaud. Variational Bayesian Inference for Audio-Visual Tracking of Multiple Speakers. IEEE Transactions on Pattern Analysis and Machine Intelligence, Institute of Electrical and Electronics Engineers, 2019, 42, pp.1-17. ⟨10.1109/TPAMI.2019.2953020⟩. ⟨hal-01950866v2⟩
