
Estimating Human Pose with Flowing Puppets

Abstract: We address the problem of upper-body human pose estimation in uncontrolled monocular video sequences, without manual initialization. Most current methods focus on isolated video frames and often fail to correctly localize arms and hands. Inferring pose over a video sequence is advantageous because poses of people in adjacent frames vary smoothly due to the nature of human and camera motion. To exploit this, previous methods have used prior knowledge about distinctive actions or generic temporal priors combined with static image likelihoods to track people in motion. Here we take a different approach based on a simple observation: information about how a person moves from frame to frame is present in the optical flow field. We develop an approach for tracking articulated motions that "links" articulated shape models of people in adjacent frames through the dense optical flow. Key to this approach is a 2D shape model of the body that we use to compute how the body moves over time. The resulting "flowing puppets" provide a way of integrating image evidence across frames to improve pose inference. We apply our method to a challenging dataset of TV video sequences and show state-of-the-art performance.

Contributor: Thoth Team
Submitted on: Wednesday, November 20, 2013 - 1:04:39 PM
Last modification on: Thursday, March 26, 2020 - 8:49:27 PM
Document(s) archived on: Friday, February 21, 2014 - 4:28:43 AM

Silvia Zuffi, Javier Romero, Cordelia Schmid, Michael J. Black. Estimating Human Pose with Flowing Puppets. IEEE International Conference on Computer Vision (ICCV), Dec 2013, Sydney, Australia, pp. 3312-3319. ⟨10.1109/ICCV.2013.411⟩. ⟨hal-00906800⟩