Detecting Parts for Action Localization - Archive ouverte HAL
Conference Paper, 2017

Detecting Parts for Action Localization


Abstract

In this paper, we propose a new framework for action localization that tracks people in videos and extracts full-body human tubes, i.e., spatio-temporal regions localizing actions, even in the case of occlusions or truncations. This is achieved by training a novel human part detector that scores visible parts while regressing full-body bounding boxes. The core of our method is a convolutional neural network that learns part proposals specific to certain body parts. These are then combined to detect people robustly in each frame. Our tracking algorithm connects the image detections temporally to extract full-body human tubes. We apply our new tube extraction method to the problem of human action localization on the popular JHMDB dataset and the recent, challenging DALY dataset (Daily Action Localization in YouTube), showing state-of-the-art results.
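The abstract describes linking per-frame person detections temporally into spatio-temporal tubes. The sketch below illustrates one common way such linking can work, using greedy IoU-based matching between consecutive frames; the threshold, the greedy strategy, and the function names are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch: connect per-frame full-body detections into tubes
# by greedily matching each existing tube's last box to the detection with
# the highest IoU in the next frame. Not the authors' exact method.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def link_tubes(frames, iou_thr=0.3):
    """frames: list of per-frame detection lists [(box, score), ...].
    Returns tubes as lists of (frame_index, box, score) triples."""
    tubes = []
    for t, dets in enumerate(frames):
        unmatched = list(dets)
        for tube in tubes:
            last_t, last_box, _ = tube[-1]
            # only extend tubes that reached the previous frame
            if last_t != t - 1 or not unmatched:
                continue
            best = max(unmatched, key=lambda d: iou(last_box, d[0]))
            if iou(last_box, best[0]) >= iou_thr:
                tube.append((t, best[0], best[1]))
                unmatched.remove(best)
        for box, score in unmatched:
            tubes.append([(t, box, score)])  # start a new tube
    return tubes
```

In the paper, the boxes being linked are full-body boxes regressed from visible parts, which is what keeps tubes intact under occlusion; the linking step itself is the simple part sketched here.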
Main file: 0159.pdf (3.79 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01573629 , version 1 (10-08-2017)


Cite

Nicolas Chesneau, Grégory Rogez, Karteek Alahari, Cordelia Schmid. Detecting Parts for Action Localization. BMVC - British Machine Vision Conference, Sep 2017, London, United Kingdom. ⟨hal-01573629⟩