Learning Contextual Variations for Video Segmentation

Vincent Martin 1, *, Monique Thonnat 1
* Corresponding author
Abstract : This paper deals with video segmentation in vision systems. We focus on the maintenance of background models in long-term videos of changing environments, which remains a real challenge in video surveillance. We propose an original weakly supervised method for learning contextual variations in videos. Our approach uses a clustering algorithm to automatically identify different contexts based on image-content analysis. State-of-the-art video segmentation algorithms (e.g., codebook, MoG) are then trained on each cluster, the goal being a dynamic selection of background models. We have evaluated our approach on a long video sequence (24 hours). The results show that our approach improves segmentation compared to the codebook and MoG algorithms.
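The abstract describes the method only at a high level; the sketch below is a hypothetical illustration of the per-context background-model idea, not the authors' implementation. It assumes OpenCV and NumPy, uses MOG2 as a stand-in for the paper's MoG/codebook models, a grey-level histogram as a stand-in for the paper's image-content features, and plain k-means in place of the paper's clustering step; the class and function names are made up for this sketch.

```python
import cv2
import numpy as np


def context_feature(frame, bins=32):
    # Global grey-level histogram as a simple image-content descriptor
    # (stand-in for the paper's actual context features).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).flatten()
    return (hist / (hist.sum() + 1e-9)).astype(np.float32)


class ContextualBackgroundSegmenter:
    """Maintains one background model per context; the context is selected per frame."""

    def __init__(self, n_contexts=3):
        self.n_contexts = n_contexts
        self.centers = None
        # One MOG2 background subtractor per context (stand-in for MoG/codebook).
        self.models = [
            cv2.createBackgroundSubtractorMOG2(detectShadows=False)
            for _ in range(n_contexts)
        ]

    def fit_contexts(self, training_frames):
        # Cluster training frames into contexts with k-means on their features.
        feats = np.vstack([context_feature(f) for f in training_frames])
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-4)
        _, labels, centers = cv2.kmeans(
            feats, self.n_contexts, None, criteria, 10, cv2.KMEANS_PP_CENTERS
        )
        self.centers = centers
        # Train each context's background model only on the frames of its cluster.
        for frame, label in zip(training_frames, labels.ravel()):
            self.models[label].apply(frame)

    def segment(self, frame):
        # Dynamically select the closest context, then apply its background model.
        feat = context_feature(frame)
        ctx = int(np.argmin(np.linalg.norm(self.centers - feat, axis=1)))
        return ctx, self.models[ctx].apply(frame)
```

In use, `fit_contexts` would be called once on a sample of frames spanning the different illumination conditions (e.g., day and night in a 24-hour sequence), and `segment` then routes each new frame to the background model trained under the matching context.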
Document type :
Conference papers

Cited literature [7 references]

https://hal.inria.fr/inria-00499631
Contributor : Vincent Martin
Submitted on : Sunday, July 11, 2010 - 1:23:24 PM
Last modification on : Saturday, January 27, 2018 - 1:30:44 AM
Long-term archiving on : Tuesday, October 12, 2010 - 10:10:55 AM

File

ICVS08.pdf
Files produced by the author(s)

Citation

Vincent Martin, Monique Thonnat. Learning Contextual Variations for Video Segmentation. International Conference on Computer Vision Systems, May 2008, Patras, Greece. pp.464-473, ⟨10.1007/978-3-540-79547-6_45⟩. ⟨inria-00499631⟩

Metrics

Record views: 214
File downloads: 236