Conference paper, 2014

Automatic propagation of manual annotations for multimodal person identification in TV shows


Abstract

In this paper, an approach to the propagation of human annotations for person identification in a multimodal context is proposed. The system combines speaker diarization and face clustering to produce multimodal clusters. Entire multimodal clusters, rather than single tracks, are then annotated, with the labels spread to all cluster members by propagation. An optical character recognition system provides the initial annotation. Four different strategies for selecting annotation candidates are tested. The initial results of annotation propagation are promising: with a suitable active learning selection strategy, human annotator involvement could be reduced even further.
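The cluster-level annotation scheme described in the abstract can be illustrated with a minimal sketch. The function names, the track/cluster dictionaries, and the largest-cluster-first heuristic below are illustrative assumptions, not the paper's actual implementation or its four selection strategies:

```python
from collections import defaultdict

def propagate_labels(track_to_cluster, cluster_labels):
    """Propagate a cluster-level label to every track in that cluster.

    track_to_cluster: maps a track id to its multimodal cluster id.
    cluster_labels: maps an annotated cluster id to a person name.
    Returns a track-level labeling; unannotated clusters are skipped.
    """
    return {track: cluster_labels[cluster]
            for track, cluster in track_to_cluster.items()
            if cluster in cluster_labels}

def largest_cluster_first(track_to_cluster, already_labeled):
    """One possible selection strategy (assumption): ask the human
    annotator about the largest still-unlabeled cluster, since labeling
    it propagates to the most tracks at once. Returns None when done."""
    sizes = defaultdict(int)
    for cluster in track_to_cluster.values():
        if cluster not in already_labeled:
            sizes[cluster] += 1
    return max(sizes, key=sizes.get) if sizes else None
```

For example, with three tracks in two clusters, labeling only cluster `c1` as "Alice" propagates that name to both of its tracks in one annotation step, which is the source of the annotator-effort savings the abstract mentions.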

Dates and versions

hal-01002927, version 1 (07-06-2014)


Cite

Mateusz Budnik, Johann Poignant, Laurent Besacier, Georges Quénot. Automatic propagation of manual annotations for multimodal person identification in TV shows. 12th International Workshop on Content-Based Multimedia Indexing (CBMI), Jun 2014, Klagenfurt, Austria. ⟨hal-01002927⟩