Conference paper, Year: 2008

Discovering Primitive Action Categories by Leveraging Relevant Visual Context

Abstract

Under the bag-of-features framework, we aim to learn primitive action categories from video without supervision by leveraging relevant visual context in addition to motion features. We define visual context as the appearance of the entire scene, including the actor, related objects, and relevant background features. To leverage visual context along with motion features, we learn a bi-modal latent variable model that discovers action categories without supervision. Our experiments show that combining relevant visual context with motion features improves the performance of action discovery. Furthermore, we show that our method is able to leverage relevant visual features for action discovery despite the presence of irrelevant background objects.
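To make the idea concrete, below is a minimal sketch of a bi-modal pLSA-style latent variable model under a bag-of-features representation, in the spirit described in the abstract. It is not the authors' implementation: the EM routine, the vocabulary sizes, and all variable names (e.g. `bimodal_plsa`, `motion_counts`, `context_counts`) are illustrative assumptions. Each clip is summarized by two histograms, one over motion words and one over visual-context words, and both modalities share the same latent action categories.

```python
# Illustrative sketch only (not the authors' code): a bi-modal pLSA-like model
# where each video clip d has motion-word counts and context-word counts, and
# latent topics z play the role of discovered primitive action categories.
import numpy as np

def bimodal_plsa(N_motion, N_context, n_topics, n_iter=50, seed=0):
    """N_motion: (clips x motion vocab) counts; N_context: (clips x context vocab) counts."""
    rng = np.random.default_rng(seed)
    D, Vm = N_motion.shape
    _, Vc = N_context.shape
    # Random initialisation of P(w|z) for each modality and P(z|d) per clip.
    p_wm_z = rng.random((n_topics, Vm)); p_wm_z /= p_wm_z.sum(1, keepdims=True)
    p_wc_z = rng.random((n_topics, Vc)); p_wc_z /= p_wc_z.sum(1, keepdims=True)
    p_z_d = rng.random((D, n_topics));   p_z_d /= p_z_d.sum(1, keepdims=True)

    for _ in range(n_iter):
        # E-step: topic posterior for every (clip, word) pair in each modality.
        post_m = p_z_d[:, :, None] * p_wm_z[None, :, :]            # D x Z x Vm
        post_m /= post_m.sum(1, keepdims=True) + 1e-12
        post_c = p_z_d[:, :, None] * p_wc_z[None, :, :]            # D x Z x Vc
        post_c /= post_c.sum(1, keepdims=True) + 1e-12

        # M-step: expected counts re-estimate the word distributions per topic
        # and the topic mixture per clip; both modalities vote for P(z|d).
        nm = N_motion[:, None, :] * post_m                         # D x Z x Vm
        nc = N_context[:, None, :] * post_c                        # D x Z x Vc
        p_wm_z = nm.sum(0); p_wm_z /= p_wm_z.sum(1, keepdims=True) + 1e-12
        p_wc_z = nc.sum(0); p_wc_z /= p_wc_z.sum(1, keepdims=True) + 1e-12
        p_z_d = nm.sum(2) + nc.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_wm_z, p_wc_z

# Toy usage with synthetic histograms: 20 clips, 100 motion words,
# 150 context words, and 4 hypothetical action categories.
rng = np.random.default_rng(1)
motion_counts = rng.poisson(1.0, size=(20, 100)).astype(float)
context_counts = rng.poisson(1.0, size=(20, 150)).astype(float)
p_z_d, _, _ = bimodal_plsa(motion_counts, context_counts, n_topics=4)
print("Discovered category per clip:", p_z_d.argmax(axis=1))
```

In this kind of formulation the most probable topic per clip gives the unsupervised action label, and the per-topic context-word distribution indicates which scene features the model treats as relevant to each action; whether the paper uses exactly this model or a variant is not stated here.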
Main file: VS2008-Poster-a.pdf (905.99 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00325777, version 1 (30-09-2008)

Identifiers

  • HAL Id: inria-00325777, version 1

Cite

Kris M. Kitani, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto. Discovering Primitive Action Categories by Leveraging Relevant Visual Context. The Eighth International Workshop on Visual Surveillance (VS2008), Graeme Jones, Tieniu Tan, Steve Maybank, Dimitrios Makris (Eds.), Oct 2008, Marseille, France. ⟨inria-00325777⟩

Collections

VS2008