A robust and efficient video representation for action recognition

Heng Wang (1), Dan Oneata (2), Jakob Verbeek (3, 2), Cordelia Schmid (2)
(2) LEAR - Learning and recognition in vision, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
(3) Thoth - Apprentissage de modèles à partir de données massives, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
Abstract: This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches from the human body, as human motion is not constrained by the camera. Trajectories consistent with the homography are considered to be due to camera motion, and thus removed. We also use the homography to cancel out camera motion from the optical flow. This results in significant improvement on motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding approach to the standard bag-of-words histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to bag-of-words encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.
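The core camera-motion step the abstract describes fits a homography to inter-frame point matches with RANSAC, then treats matches consistent with that homography as background (camera) motion. The paper uses SURF descriptors and dense optical flow to obtain the matches; the sketch below is only a minimal, self-contained NumPy illustration of the RANSAC homography-fitting step on already-extracted matches (the function names `fit_homography` and `ransac_homography` are ours, not from the paper's code):

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography (DLT) from >= 4 point matches.

    src, dst: (n, 2) arrays of corresponding image points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=3.0, rng=None):
    """RANSAC loop: fit on random 4-match samples, keep the largest inlier set.

    Returns the homography refit on all inliers, plus the inlier mask.
    Matches flagged as inliers would be attributed to camera motion."""
    rng = np.random.default_rng(rng)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    ones = np.ones((n, 1))
    src_h = np.hstack([src, ones])          # homogeneous coordinates
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]   # back to inhomogeneous points
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

In the paper's pipeline the estimated homography is then used both to discard trajectories consistent with it and to warp the second frame before recomputing optical flow, which cancels camera motion in the HOF and MBH descriptors; in practice one would use an optimized implementation such as OpenCV's `cv2.findHomography` with the RANSAC flag rather than this didactic version.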
Document type: Journal article
International Journal of Computer Vision, Springer Verlag, 2016, 119 (3), pp.219-238. <10.1007/s11263-015-0846-5>
https://hal.inria.fr/hal-01145834
Contributor: Thoth Team
Submitted on: Friday, July 24, 2015 - 11:19:31
Last modified on: Tuesday, November 22, 2016 - 13:15:09
Document(s) archived on: Sunday, October 25, 2015 - 10:21:24

Files

IJCV.hal.pdf
Files produced by the author(s)

Citation

Heng Wang, Dan Oneata, Jakob Verbeek, Cordelia Schmid. A robust and efficient video representation for action recognition. International Journal of Computer Vision, Springer Verlag, 2016, 119 (3), pp.219-238. <10.1007/s11263-015-0846-5>. <hal-01145834>

Metrics

Record views: 1086
Document downloads: 2875