Rationale

Video understanding is the real-time process of perceiving, analysing, and producing a semantic description of a 3D dynamic scene observed through a network of cameras and possibly other sensors. This process mainly consists in analysing the signal information provided by the sensors observing the scene, using a large variety of models, whether the ones humans ordinarily use to understand a scene or models defined purposely for the task.

Computer vision and pattern recognition are the main technologies used for the automatic monitoring of public spaces over extended durations. Effective approaches for tracking people and recognising poses, postures, gestures, and collective crowd phenomena in public environments have been developed over the last five years, especially in the video surveillance context, with the aim of classifying behaviours as suspect, unusual, or abnormal. However, the core problem of understanding remains complex and needs further progress before it can truly address real-world situations.

The main challenge is to generate qualitative and semantic descriptions of people or object motion, up to a detailed description of body-part configuration, even in complex scenes. These goals have become a key task in many computer vision applications, such as image and scene understanding, health care, video indexing and retrieval, video surveillance, and advanced human-computer interaction.

The key questions to be answered are:

How far (i.e. towards more precise descriptions and longer activities) can we go with today's technologies when analysing people's behaviour?

How can we bridge the gap between the video signal and semantic activities?

Topics

Behaviour 2011 aims to promote interaction and collaboration among researchers specialising in these related fields, which include (but are by no means limited to):

  • People detection and tracking
  • Video activity discovery
  • Group of people, crowd analysis
  • Multi-camera and multimodal analysis
  • High-level behaviour recognition and understanding
  • Long term event recognition
  • Use of ontologies on human motion for video footage
  • Browsing, indexing and retrieval of human behaviours in video sequences
  • Natural-language description of human behaviours
  • Cognitive surveillance and ambient intelligence
  • Learning models for behaviour analysis
  • Human behaviour synthesis: articulated models and animation
  • Real-time systems, system evaluation
  • Abnormal event detection
Program committee
  • Jenny Benois-Pineau, University Bordeaux 1, LaBRI
  • Ni Bingbing, ADSC Singapore
  • Vittorio Murino, University of Verona, Italy
  • Shuicheng Yan, NUS Singapore
  • Nam Trung Pham, Institute for Infocomm Research, Singapore
  • Wang Yue, Institute for Infocomm Research, Singapore
  • Cyril Carincotte, Multitel, Belgium
  • Paolo Remagnino, Kingston University, United Kingdom
  • Alain Boucher, IFI-AUF, Vietnam
  • Marcos Zúñiga Barraza, Departamento de Electrónica - UTFSM, Chile
Organizers
  • Francois Bremond, INRIA Sophia Antipolis, France
  • Jose Luis Patino Vilchis, INRIA Sophia Antipolis, France
  • Richard P. Chang, Institute for Infocomm Research, Singapore
  • Karianto Leman, Institute for Infocomm Research, Singapore
  • Jean-Marc Odobez, IDIAP, Switzerland


[Logos: Inria, Idiap, VANAHEIM, Région PACA, I2R]


Produced by the IST Service, INRIA Sophia Antipolis Méditerranée / I3S Laboratory