
A Framework for Indexing Human Actions in Video

Abstract: Several researchers have addressed the problem of human action recognition using a variety of algorithms. An underlying assumption in most of these algorithms is that the action boundaries in a test video sequence are already known. In this paper, we propose a fast method for continuous human action recognition in a video sequence. We propose the use of a low-dimensional feature vector which consists of (a) the projections of the actor's width profile onto a Discrete Cosine Transform (DCT) basis and (b) simple spatio-temporal features. We use a previously proposed average-template model with multiple features for modelling human actions and combine it with a one-pass Dynamic Programming (DP) algorithm for continuous action recognition. This model accounts for intra-class variability in the way an action is performed. Furthermore, we demonstrate a way to perform noise-robust recognition by creating matched noise conditions between the training and test data. The effectiveness of our method is demonstrated by experiments on the IXMAS dataset of persons performing various actions and on an outdoor action database collected by us.
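The feature described in (a) can be illustrated with a short sketch: compute the per-row width of a binary silhouette and project that width profile onto the first few DCT-II basis vectors to obtain a low-dimensional descriptor. This is an illustrative reconstruction, not the authors' code; the function names, the orthonormal DCT normalization, and the choice of `k` coefficients are assumptions.

```python
import numpy as np

def dct_basis(n, k):
    # Orthonormal DCT-II basis: each of the k rows is a basis vector of length n.
    basis = np.zeros((k, n))
    for i in range(k):
        basis[i] = np.cos(np.pi * i * (np.arange(n) + 0.5) / n)
    basis[0] *= np.sqrt(1.0 / n)   # DC term scaling for orthonormality
    basis[1:] *= np.sqrt(2.0 / n)  # remaining terms
    return basis

def width_profile(silhouette):
    # Width of the foreground region in each row of a binary H x W silhouette.
    cols = np.arange(silhouette.shape[1])
    widths = []
    for row in silhouette:
        idx = cols[row > 0]
        widths.append(idx.max() - idx.min() + 1 if idx.size else 0)
    return np.asarray(widths, dtype=float)

def dct_features(silhouette, k=10):
    # Project the width profile onto the first k DCT basis vectors.
    w = width_profile(silhouette)
    return dct_basis(w.size, k) @ w
```

Stacking such per-frame vectors over time (together with the spatio-temporal features in (b)) would yield the sequence of observations that the one-pass DP algorithm matches against the average templates.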
Document type :
Conference papers
Cited literature: 17 references
Contributor: Peter Sturm
Submitted on: Sunday, October 5, 2008 - 12:41:10 PM
Last modification on: Monday, October 6, 2008 - 9:40:57 AM
Long-term archiving on: Friday, June 4, 2010 - 12:12:37 PM


Files produced by the author(s)


  • HAL Id: inria-00326719, version 1



Kaustubh Kulkarni, Srikanth Cherla, Amit Kale, V. Ramasubramanian. A Framework for Indexing Human Actions in Video. The 1st International Workshop on Machine Learning for Vision-based Motion Analysis - MLVMA'08, Oct 2008, Marseille, France. ⟨inria-00326719⟩


