Deep-Temporal LSTM for Daily Living Action Recognition

Conference paper, 2018. Hosted on HAL (Inria - Institut national de recherche en sciences et technologies du numérique).

Authors: Srijan Das, Michal F Koperski, Francois Bremond, Gianpiero Francesca

Abstract

In this paper, we propose to improve the traditional use of RNNs by employing a many-to-many model for video classification. We analyze the importance of modeling spatial layout and temporal encoding for daily living action recognition. Many RGB-based methods focus only on short-term temporal information obtained from optical flow. Skeleton-based methods, on the other hand, show that modeling long-term skeleton evolution improves action recognition accuracy. In this work, we propose a deep-temporal LSTM architecture which extends the standard LSTM and allows better encoding of temporal information. In addition, we propose to fuse 3D skeleton geometry with deep static appearance. We validate our approach on the publicly available CAD60, MSRDailyActivity3D and NTU-RGB+D datasets, achieving competitive performance compared to the state of the art.
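The paper's architecture is not reproduced on this page; as an illustration of the many-to-many idea the abstract describes, here is a minimal NumPy sketch (all names, dimensions, and the time-averaged fusion step are hypothetical, not the authors' implementation): stacked LSTM layers emit one hidden state per frame, every frame is classified, and the per-frame class scores are averaged into a single video-level prediction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMLayer:
    """One LSTM layer run over a whole sequence (illustrative sketch)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for the four gates (input, forget, cell, output).
        self.W = rng.normal(0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, xs):
        H = self.hidden_dim
        h = np.zeros(H)
        c = np.zeros(H)
        hs = []
        for x in xs:                                  # one step per frame
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[0:H])                       # input gate
            f = sigmoid(z[H:2 * H])                   # forget gate
            g = np.tanh(z[2 * H:3 * H])               # candidate cell state
            o = sigmoid(z[3 * H:4 * H])               # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
            hs.append(h)
        return np.stack(hs)  # a hidden state per time step: many-to-many

def many_to_many_classify(xs, layers, W_out):
    """Stack temporal layers, classify each frame, average over time."""
    feats = xs
    for layer in layers:
        feats = layer.forward(feats)                  # deeper temporal encoding
    logits = feats @ W_out.T                          # per-frame class scores
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return probs.mean(axis=0)                         # video-level prediction

# Toy usage: 8 frames of 16-D features, 2 stacked layers, 5 action classes.
rng = np.random.default_rng(1)
video = rng.normal(size=(8, 16))
layers = [LSTMLayer(16, 32, seed=2), LSTMLayer(32, 32, seed=3)]
W_out = rng.normal(0, 0.1, (5, 32))
scores = many_to_many_classify(video, layers, W_out)
# scores has shape (5,) and sums to 1
```

The many-to-many layout means a classification decision exists at every time step, so longer-term temporal evolution contributes to the final label rather than only the last hidden state.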
Main file: avss-2018.pdf (466.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01896064, version 1 (15-10-2018)

Identifiers

  • HAL Id: hal-01896064, version 1

Cite

Srijan Das, Michal F Koperski, Francois Bremond, Gianpiero Francesca. Deep-Temporal LSTM for Daily Living Action Recognition. 15th IEEE International Conference on Advanced Video and Signal-based Surveillance, Nov 2018, Auckland, New Zealand. ⟨hal-01896064⟩
72 views
274 downloads
