Spatial-Temporal Neural Networks for Action Recognition

Abstract: Action recognition is an important yet challenging problem in many applications. Recently, neural network and deep learning approaches have been widely applied to action recognition and have yielded impressive results. In this paper, we present a spatial-temporal neural network model to recognize human actions in videos. The network is composed of two connected structures. A two-stream-based network extracts appearance and optical flow features from video frames, characterizing the spatial information of human actions. A group of LSTM structures following the spatial network describes the temporal information of human actions. We evaluate our model on two public datasets, and the experimental results show that our method improves action recognition accuracy compared to the baseline methods.
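As a rough illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a two-stream model (an appearance stream over RGB frames and a motion stream over optical-flow fields) whose per-frame features are fed to an LSTM for temporal modeling. The class name, layer sizes, fusion by concatenation, and classification from the last time step are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class TwoStreamLSTM(nn.Module):
    """Illustrative two-stream CNN + LSTM action recognizer (hypothetical sketch)."""

    def __init__(self, num_classes=101, feat_dim=256, hidden_dim=512):
        super().__init__()
        # Spatial stream: per-frame appearance features from RGB frames.
        self.rgb_cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # Motion stream: features from optical-flow fields (2 channels: dx, dy).
        self.flow_cnn = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # LSTM aggregates the fused per-frame features over time.
        self.lstm = nn.LSTM(input_size=2 * feat_dim, hidden_size=hidden_dim,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, rgb, flow):
        # rgb:  (batch, time, 3, H, W) RGB frames
        # flow: (batch, time, 2, H, W) optical-flow fields aligned with the frames
        b, t = rgb.shape[:2]
        rgb_feat = self.rgb_cnn(rgb.flatten(0, 1)).view(b, t, -1)
        flow_feat = self.flow_cnn(flow.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([rgb_feat, flow_feat], dim=-1)  # fuse the two streams per frame
        out, _ = self.lstm(seq)
        return self.classifier(out[:, -1])              # predict from the last time step


# Usage sketch: a batch of 8-frame clips of 112x112 frames.
model = TwoStreamLSTM(num_classes=101)
rgb = torch.randn(4, 8, 3, 112, 112)
flow = torch.randn(4, 8, 2, 112, 112)
logits = model(rgb, flow)  # shape (4, 101)
```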
Document type: Conference papers

Cited literature: 36 references

https://hal.inria.fr/hal-01821062
Contributor: Hal Ifip
Submitted on: Friday, June 22, 2018 - 11:45:34 AM
Last modification on: Wednesday, June 10, 2020 - 10:00:04 AM
Long-term archiving on: Tuesday, September 25, 2018 - 3:59:33 PM

File

467708_1_En_52_Chapter.pdf
Files produced by the author(s)

Licence


Distributed under a Creative Commons Attribution 4.0 International License

Citation

Chao Jing, Ping Wei, Hongbin Sun, Nanning Zheng. Spatial-Temporal Neural Networks for Action Recognition. 14th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2018, Rhodes, Greece. pp.619-627, ⟨10.1007/978-3-319-92007-8_52⟩. ⟨hal-01821062⟩
