Audio-Visual Multiple-Speaker Tracking for Robot Perception

Yutong Ban [1]
[1] PERCEPTION - Interpretation and Modelling of Images and Videos, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
Abstract: Robot perception plays a crucial role in human-robot interaction (HRI). The perception system provides the robot with information about its surroundings and enables it to interact with people. In a conversational scenario, a group of people may chat in front of the robot while moving freely. In such situations, the robot is expected to understand where the people are, who is speaking, and what they are talking about. This thesis concentrates on the first two questions: speaker tracking and diarization. To address them, we use different modalities of the robot's perception system. Just as humans rely on sight and hearing, a robot in a conversational scenario depends on audio and visual cues. Advances in computer vision and audio processing over the last decade have revolutionized robot perception and enable joint audio-visual applications. This thesis makes the following contributions. We first develop a variational Bayesian framework for tracking multiple objects; it provides closed-form, tractable solutions, enabling efficient tracking. The framework is first applied to visual multiple-person tracking, with birth and death processes modelled jointly to handle the varying number of people in the scene. We then augment the framework by exploiting the complementarity of vision and robot motor information: on the one hand, the robot's active motion can be integrated into the visual tracking system to stabilize the tracking; on the other hand, visual information can be used for motor servoing. Next, we combine audio and visual information in the framework and exploit the association between acoustic frequency bins and tracked people, both to estimate smooth person trajectories and to infer each person's acoustic status (i.e. speaking or silent). To adapt the framework to applications without visual information, we apply it to acoustic-only speaker localization and tracking, where online dereverberation is performed first and is followed by the tracking system. Finally, we propose a variant of the acoustic-only tracking model based on the von Mises distribution, which is specifically suited to directional data. All proposed methods are validated qualitatively and quantitatively on datasets.
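To make the directional model mentioned above concrete: the von Mises density $f(\theta; \mu, \kappa) = \exp(\kappa \cos(\theta - \mu)) / (2\pi I_0(\kappa))$ is the natural analogue of a Gaussian on the circle, where the concentration $\kappa$ plays the role of an inverse variance. The sketch below evaluates a von Mises observation likelihood for a sound direction of arrival (DOA) against each tracked speaker's predicted azimuth. It is a minimal illustration under stated assumptions, not the thesis implementation: the use of scipy's `vonmises`, the `doa_likelihood` helper, and all parameter values are choices made for this example.

```python
import numpy as np
from scipy.stats import vonmises

# Minimal sketch (not the thesis code): score an observed direction of
# arrival (DOA) against each tracked speaker's predicted azimuth with a
# von Mises likelihood. mu is the predicted azimuth in radians; kappa is
# the concentration, acting as an inverse variance on the circle.

def doa_likelihood(theta_obs, mu, kappa=4.0):
    """Von Mises likelihood of an observed DOA given a predicted azimuth."""
    return vonmises.pdf(theta_obs, kappa, loc=mu)

# Hypothetical example: one DOA observation, two tracked speakers.
theta = np.deg2rad(30.0)  # observed DOA
for name, mu in [("speaker A", np.deg2rad(25.0)),
                 ("speaker B", np.deg2rad(90.0))]:
    print(f"{name} (azimuth {np.rad2deg(mu):5.1f} deg): "
          f"likelihood {doa_likelihood(theta, mu):.4f}")
```

In a full tracker, per-speaker likelihoods of this kind would feed a posterior over observation-to-speaker assignments; in the thesis this association is resolved within the variational Bayesian framework rather than by direct likelihood comparison.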

https://hal.inria.fr/tel-02163418
Contributor: Team Perception
Submitted on: Thursday, July 4, 2019 - 2:53:51 PM
Last modification on: Monday, July 8, 2019 - 11:57:12 AM

File

Thesis_Ban.pdf (files produced by the author(s))

Identifiers

  • HAL Id: tel-02163418, version 2

Citation

Yutong Ban. Audio-Visual Multiple-Speaker Tracking for Robot Perception. Computer Vision and Pattern Recognition [cs.CV]. Université Grenoble - Alpes, 2019. English. ⟨tel-02163418v2⟩

Metrics

  • Record views: 93
  • File downloads: 356