Automatic propagation of manual annotations for multimodal person identification in TV shows

Abstract: In this paper, an approach to propagating human annotations for person identification in a multimodal context is proposed. A system combining speaker diarization and face clustering is used to produce multimodal clusters. Whole multimodal clusters, rather than single tracks, are then annotated, with the labels propagated to their member tracks. An optical character recognition system provides the initial annotation. Four different strategies for selecting candidate clusters for annotation are tested. The initial results of annotation propagation are promising, and with a suitable active learning selection strategy, human annotator involvement could be reduced even further.
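The cluster-level propagation described in the abstract lends itself to a compact sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the MultimodalCluster structure, the OCR bootstrapping step, and the largest-cluster-first selection rule are all hypothetical choices made for the example.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MultimodalCluster:
    """A group of speech turns and face tracks assumed to belong to one person.
    (Hypothetical structure for illustration only.)"""
    cluster_id: int
    tracks: List[str] = field(default_factory=list)     # speech turns + face tracks
    ocr_names: List[str] = field(default_factory=list)  # names read by OCR in overlapping overlays
    label: Optional[str] = None                          # person identity, once known

def initial_labels_from_ocr(clusters: List[MultimodalCluster]) -> None:
    """Bootstrap: label a cluster with its most frequent OCR name, if any."""
    for c in clusters:
        if c.ocr_names:
            c.label = Counter(c.ocr_names).most_common(1)[0][0]

def select_cluster_to_annotate(clusters: List[MultimodalCluster]) -> Optional[MultimodalCluster]:
    """One possible selection strategy: ask the human annotator about the largest
    still-unlabeled cluster, so a single answer propagates to many tracks."""
    unlabeled = [c for c in clusters if c.label is None]
    return max(unlabeled, key=lambda c: len(c.tracks), default=None)

def propagate(cluster: MultimodalCluster, name: str) -> List[Tuple[str, str]]:
    """Propagate one human annotation to every track in the cluster."""
    cluster.label = name
    return [(track, name) for track in cluster.tracks]
```

In this sketch, each human answer labels an entire multimodal cluster at once; swapping the body of select_cluster_to_annotate would give a different selection strategy of the kind the paper compares.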
Document type: Conference papers

Cited literature: 16 references

https://hal.inria.fr/hal-01002927
Contributor: Laurent Besacier
Submitted on: Saturday, June 7, 2014 - 3:01:06 PM
Last modification on: Monday, July 8, 2019 - 3:08:51 PM
Long-term archiving on: Sunday, September 7, 2014 - 10:46:25 AM

File

draft.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01002927, version 1

Citation

Mateusz Budnik, Johann Poignant, Laurent Besacier, Georges Quénot. Automatic propagation of manual annotations for multimodal person identification in TV shows. 12th International Workshop on Content-Based Multimedia Indexing (CBMI), Jun 2014, Klagenfurt, Austria. ⟨hal-01002927⟩

Metrics

Record views: 277

File downloads: 2331