A flexible model for training action localization with varying levels of supervision - Archive ouverte HAL
Conference Papers, Year: 2018

A flexible model for training action localization with varying levels of supervision


Abstract

Spatio-temporal action detection in videos is typically addressed in a fully-supervised setup, with manual annotation of training videos required at every frame. Since such annotation is extremely tedious and prohibits scalability, there is a clear need to minimize the amount of manual supervision. In this work, we propose a unifying framework that can handle and combine varying types of less-demanding weak supervision. Our model is based on discriminative clustering and integrates different types of supervision as constraints on the optimization. We investigate applications of such a model to training setups with alternative supervisory signals, ranging from video-level class labels to full per-frame annotation of action bounding boxes. Experiments on the challenging UCF101-24 and DALY datasets demonstrate competitive performance of our method at a fraction of the supervision used by previous methods. The flexibility of our model enables joint learning from data with different levels of annotation. Experimental results demonstrate a significant gain from adding a few fully supervised examples to otherwise weakly labeled videos.
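The abstract describes a model based on discriminative clustering, with weak supervision integrated as constraints on the optimization. The sketch below is a minimal toy illustration of that general idea, not the authors' actual formulation: it uses a DIFFRAC-style ridge-regression clustering cost, a binary action/background labeling, exhaustive search over assignments, and a single video-level constraint ("at least one frame shows the action"). All function names and the choice of objective are illustrative assumptions.

```python
import numpy as np
from itertools import product

def diffrac_cost_matrix(X, lam=1.0):
    """Return B such that cost(Y) = trace(Y.T @ B @ Y) for the
    ridge-regression discriminative-clustering objective
    min_W ||Y - X W||^2 + n*lam*||W||^2 (W solved in closed form)."""
    n, d = X.shape
    # Hat matrix of the ridge regression from features X onto labels Y.
    P = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T)
    return np.eye(n) - P  # positive semi-definite, so cost(Y) >= 0

def best_assignment(X, lam=1.0):
    """Toy exhaustive search over binary frame labels (action vs.
    background), subject to a weak video-level label constraint:
    at least one frame must be assigned to the action class."""
    n = X.shape[0]
    B = diffrac_cost_matrix(X, lam)
    best, best_cost = None, np.inf
    for bits in product([0, 1], repeat=n):
        if sum(bits) == 0:
            continue  # violates the video-level supervision constraint
        y = np.array(bits, dtype=float)
        Y = np.stack([y, 1.0 - y], axis=1)  # one-hot {action, background}
        cost = np.trace(Y.T @ B @ Y)
        if cost < best_cost:
            best, best_cost = np.array(bits), cost
    return best, best_cost
```

Stronger supervision (e.g. per-frame boxes) would simply shrink the feasible set of assignments further, which is what makes a constraint-based formulation flexible; the paper's actual optimization handles this at scale rather than by enumeration.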

Dates and versions

hal-01937002, version 1 (27-11-2018)


Cite

Guilhem Chéron, Jean-Baptiste Alayrac, Ivan Laptev, Cordelia Schmid. A flexible model for training action localization with varying levels of supervision. NIPS 2018 - 32nd Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. pp.1-17. ⟨hal-01937002⟩