Ontologies For Video Events

Abstract: This report describes how we represent video event knowledge for Automatic Video Interpretation. We first build an ontology structure to design concepts relative to video events. There are two main types of concepts to be represented: physical objects of the observed scene and video events occurring in the scene. A physical object can be a static object (e.g. a desk, a machine) or a mobile object detected by a vision routine (e.g. a person, a car). A video event can be a primitive state, a composite state, a primitive event or a composite event. Primitive states are the atoms from which the other concepts of the knowledge base of an Automatic Video Interpretation System are built. A composed concept (i.e. a composite state or event) is represented by a combination of its sub-concepts and, possibly, a set of events that are not allowed to occur during the recognition of this concept. We use non-temporal constraints (logical and spatial constraints) to specify the physical objects involved in a concept, and temporal constraints, including Allen's interval algebra operators, to describe relations (e.g. temporal order, duration) between the sub-concepts defined within a composed concept. Secondly, we validate the proposed video event ontology structure by building two ontologies (for Visual Bank and Metro Monitoring) using ORION's Scenario Description Language.
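The abstract's notion of a composed concept, recognized when its sub-concepts satisfy declared temporal constraints, can be illustrated with a minimal sketch. The snippet below is a hypothetical Python illustration, not ORION's actual Scenario Description Language: the `Interval`, `before` and `during` names are assumptions, and only two of Allen's thirteen interval relations are shown.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # time at which the sub-concept begins to hold
    end: float    # time at which it stops holding

# Two of Allen's thirteen interval relations (hypothetical helper names).
def before(a: Interval, b: Interval) -> bool:
    """a ends strictly before b starts (Allen's 'before' relation)."""
    return a.end < b.start

def during(a: Interval, b: Interval) -> bool:
    """a lies strictly inside b (Allen's 'during' relation)."""
    return b.start < a.start and a.end < b.end

# A composed concept is recognized when its sub-concepts satisfy the
# declared temporal constraints -- here, a primitive event followed by
# a primitive state.
enters = Interval(0.0, 2.0)   # primitive event: person enters a zone
stays  = Interval(2.5, 9.0)   # primitive state: person inside the zone

composite_recognized = before(enters, stays)
print(composite_recognized)  # True: "enters" ends before "stays" begins
```

In a full system the temporal constraints would be declared in the ontology and checked by the recognition engine, rather than hard-coded as above.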
Document type: Research Report RR-5189, INRIA, 2004

Contributor: Rapport de Recherche Inria
Submitted on: Tuesday, May 23, 2006 - 17:11:19
Last modified on: Saturday, January 27, 2018 - 01:31:29
Document(s) archived on: Tuesday, February 22, 2011 - 11:55:46



  • HAL Id: inria-00071397, version 1



François Bremond, Nicolas Maillot, Monique Thonnat, Van-Thinh Vu. Ontologies For Video Events. RR-5189, INRIA. 2004. 〈inria-00071397〉


