Learning Contextual Variations for Video Segmentation

Vincent Martin 1,* and Monique Thonnat 1
* Corresponding author
Abstract: This paper deals with video segmentation in vision systems. We focus on the maintenance of background models in long-term videos of changing environments, which remains a real challenge in video surveillance. We propose an original weakly supervised method for learning contextual variations in videos. Our approach uses a clustering algorithm to automatically identify different contexts based on image content analysis. State-of-the-art video segmentation algorithms (e.g. codebook, MoG) are then trained on each cluster, with the goal of achieving a dynamic selection of background models. We have evaluated our approach on a long (24-hour) video sequence. The results show that our approach improves segmentation compared to codebook and MoG alone.
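The pipeline described in the abstract — cluster frames into contexts from global image features, train one background model per context, then select the model whose context best matches each incoming frame — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature choice (intensity mean/std), the k-means routine, the running-average background model (a stand-in for codebook/MoG), and all parameter values are assumptions.

```python
import numpy as np

def frame_feature(frame):
    """Global feature summarizing image content (here: intensity mean and std)."""
    return np.array([frame.mean(), frame.std()])

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means to identify contexts from per-frame features."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest context centroid.
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centroids[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

class ContextualBackground:
    """One background model per context; the model applied to a new frame is
    selected dynamically by nearest context centroid (running average here,
    standing in for codebook/MoG)."""
    def __init__(self, centroids, alpha=0.05, thresh=30.0):
        self.centroids = centroids
        self.models = [None] * len(centroids)
        self.alpha = alpha      # background update rate (assumed value)
        self.thresh = thresh    # foreground threshold (assumed value)

    def segment(self, frame):
        # Select the background model of the closest context.
        ctx = int(np.argmin(np.linalg.norm(
            self.centroids - frame_feature(frame), axis=1)))
        if self.models[ctx] is None:
            self.models[ctx] = frame.astype(float)
        bg = self.models[ctx]
        mask = np.abs(frame - bg) > self.thresh               # foreground mask
        self.models[ctx] = (1 - self.alpha) * bg + self.alpha * frame
        return mask, ctx
```

With, say, day and night frames of a 24-hour sequence, the clustering step separates the two lighting contexts, so each background model only ever absorbs frames from its own context and is not corrupted by illumination changes.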
Document type :
Conference papers

Contributor: Vincent Martin
Submitted on: Sunday, July 11, 2010 - 1:23:24 PM
Last modification on: Friday, February 4, 2022 - 3:23:52 AM
Long-term archiving on: Tuesday, October 12, 2010 - 10:10:55 AM
Vincent Martin, Monique Thonnat. Learning Contextual Variations for Video Segmentation. International Conference on Computer Vision Systems, May 2008, Patras, Greece. pp.464-473, ⟨10.1007/978-3-540-79547-6_45⟩. ⟨inria-00499631⟩