Unsupervised discovery of human activities from long-time videos

Abstract: In this paper, we propose a complete framework based on Hierarchical Activity Models (HAMs) to understand and recognise Activities of Daily Living (ADL) in unstructured scenes. At each instant of a long-time video, the framework extracts a set of space-time trajectory features describing the global position of an observed person and the motion of his/her body parts. This human motion information is gathered into a new feature that we call Perceptual Feature Chunks (PFC). The set of PFCs is used to learn, in an unsupervised way, the particular regions of the scene (topology) where the important activities occur. Using the topology and the PFCs, we break the video into a set of small events (Primitive Events) that have a semantic meaning. The sequences of Primitive Events and the topology are used to construct hierarchical models of activities. The proposed approach has been evaluated in a medical application: monitoring patients suffering from Alzheimer's disease and dementia. We have compared our approach with our previous study and with a rule-based approach. Experimental results show that the framework achieves better performance than existing works and has the potential to be used as a monitoring tool in medical applications.
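The abstract's unsupervised pipeline (cluster trajectory features into scene regions, then describe a trajectory as a sequence of region visits) can be illustrated with a minimal sketch. This is not the authors' code: the k-means clustering, the 2-D point representation, and the function names are illustrative assumptions standing in for the paper's PFC and topology-learning steps.

```python
# Illustrative sketch only: cluster 2-D trajectory points into scene
# regions (a stand-in for the learned "topology"), then map a trajectory
# to the sequence of regions it visits (a stand-in for "Primitive Events").
import random


def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points; returns k centroids (region centres)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                            + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for c, pts in enumerate(clusters):
            if pts:  # keep the old centroid if a cluster emptied out
                centroids[c] = (sum(x for x, _ in pts) / len(pts),
                                sum(y for _, y in pts) / len(pts))
    return centroids


def region_sequence(trajectory, centroids):
    """Assign each point to its nearest region; collapse consecutive repeats."""
    seq = []
    for p in trajectory:
        i = min(range(len(centroids)),
                key=lambda c: (p[0] - centroids[c][0]) ** 2
                              + (p[1] - centroids[c][1]) ** 2)
        if not seq or seq[-1] != i:
            seq.append(i)
    return seq
```

For example, points gathered around two distinct areas of a scene yield two region centroids, and a trajectory moving from one area to the other is summarised as a two-element region sequence. The actual framework operates on richer space-time body-part features and builds hierarchical models on top of these event sequences.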
Document type:
Journal article
IET Computer Vision, IET, 2015, pp.1

Contributor: Serhan Cosar <>
Submitted on: Thursday, 5 March 2015 - 16:51:01
Last modified on: Tuesday, 24 July 2018 - 15:48:17
Document(s) archived on: Saturday, 6 June 2015 - 11:11:49


Files produced by the author(s)


  • HAL Id: hal-01123895, version 1



Salma Elloumi, Serhan Cosar, Guido Pusiol, Francois Bremond, Monique Thonnat. Unsupervised discovery of human activities from long-time videos. IET Computer Vision, IET, 2015, pp.1. 〈hal-01123895〉


