
Ontologies For Video Events

Abstract : This report shows how we represent video event knowledge for Automatic Video Interpretation. To solve this issue, we first build an ontology structure to design concepts relative to video events. There are two main types of concepts to be represented: physical objects of the observed scene and video events occurring in the scene. A physical object can be a static object (e.g. a desk, a machine) or a mobile object detected by a vision routine (e.g. a person, a car). A video event can be a primitive state, composite state, primitive event or composite event. Primitive states are atoms used to build the other concepts of the knowledge base of an Automatic Video Interpretation System. A composed concept (i.e. a composite state or event) is represented by a combination of its sub-concepts and possibly a set of events that are not allowed to occur during the recognition of this concept. We use non-temporal constraints (logical and spatial constraints) to specify the physical objects involved in a concept, and also temporal constraints, including Allen's interval algebra operators, to describe relations (e.g. temporal order, duration) between the sub-concepts defined within a composed concept. Secondly, we validate the proposed video event ontology structure by building two ontologies (for Visual Bank and Metro Monitoring) using ORION's Scenario Description Language.
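The structure described in the abstract — primitive states as atoms, composed concepts defined by sub-concepts plus temporal constraints drawn from Allen's interval algebra — can be illustrated with a minimal sketch. The class and event names below (`PrimitiveState`, `CompositeEvent`, the `approach_counter` scenario) are illustrative assumptions, not the report's Scenario Description Language; only the Allen relations (`before`, `during`) follow their standard definitions.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float

def before(a: Interval, b: Interval) -> bool:
    # Allen's "before": a ends strictly before b starts
    return a.end < b.start

def during(a: Interval, b: Interval) -> bool:
    # Allen's "during": a lies strictly inside b
    return b.start < a.start and a.end < b.end

@dataclass
class PrimitiveState:
    # An atom of the knowledge base: a named state holding over an interval
    name: str
    interval: Interval

class CompositeEvent:
    """A composed concept: named sub-concepts plus temporal constraints
    (hypothetical structure sketching the ontology described above)."""
    def __init__(self, name, sub_concepts, constraints):
        self.name = name
        self.sub_concepts = sub_concepts  # dict: role name -> PrimitiveState
        self.constraints = constraints    # list of (relation, role_a, role_b)

    def recognized(self) -> bool:
        # The composite is recognized when every temporal constraint
        # holds between the intervals of its sub-concepts
        return all(rel(self.sub_concepts[a].interval,
                       self.sub_concepts[b].interval)
                   for rel, a, b in self.constraints)

# Hypothetical bank-monitoring scenario: a person enters a zone,
# then is close to the counter, in that temporal order
enters = PrimitiveState("enters_zone", Interval(0, 3))
at_counter = PrimitiveState("close_to_counter", Interval(4, 9))
event = CompositeEvent("approach_counter",
                       {"enters": enters, "at_counter": at_counter},
                       [(before, "enters", "at_counter")])
print(event.recognized())  # True: the first interval ends before the second starts
```

Forbidden sub-events (the "not allowed to occur" set) could be modeled the same way, as constraints that must evaluate to false over the recognition interval.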
Contributor : Rapport De Recherche Inria
Submitted on : Tuesday, May 23, 2006 - 5:11:19 PM
Last modification on : Friday, February 4, 2022 - 3:16:22 AM
Long-term archiving on : Tuesday, February 22, 2011 - 11:55:46 AM


  • HAL Id : inria-00071397, version 1



François Bremond, Nicolas Maillot, Monique Thonnat, Van-Thinh Vu. Ontologies For Video Events. RR-5189, INRIA. 2004. ⟨inria-00071397⟩


