Ontologies For Video Events

Abstract: This report shows how we represent video event knowledge for Automatic Video Interpretation. To address this issue, we first build an ontology structure to define concepts relative to video events. There are two main types of concepts to be represented: physical objects of the observed scene and video events occurring in the scene. A physical object can be a static object (e.g. a desk, a machine) or a mobile object detected by a vision routine (e.g. a person, a car). A video event can be a primitive state, composite state, primitive event or composite event. Primitive states are the atoms from which other concepts of the knowledge base of an Automatic Video Interpretation System are built. A composed concept (i.e. a composite state or event) is represented by a combination of its sub-concepts and, possibly, a set of events that are not allowed to occur during the recognition of this concept. We use non-temporal constraints (logical and spatial constraints) to specify the physical objects involved in a concept, and temporal constraints, including Allen's interval algebra operators, to describe relations (e.g. temporal order, duration) between the sub-concepts defined within a composed concept. Secondly, we validate the proposed video event ontology structure by building two ontologies (for Visual Bank and Metro Monitoring) using ORION's Scenario Description Language.
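The abstract's core idea, composed concepts recognized from sub-concepts under non-temporal and temporal (Allen-style) constraints, can be sketched in a few lines. This is a hypothetical illustration, not ORION's actual Scenario Description Language: the class names, the `before` relation, and the bank-monitoring scenario are assumptions chosen to mirror the structure described above.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # time when the sub-concept begins
    end: float    # time when it ends

def before(a: Interval, b: Interval) -> bool:
    """Allen's 'before' relation: a ends strictly before b starts."""
    return a.end < b.start

@dataclass
class PrimitiveState:
    name: str
    physical_objects: list  # physical objects involved (e.g. a person)
    interval: Interval

def recognize_composite_event(enter: PrimitiveState,
                              at_counter: PrimitiveState) -> bool:
    # A composed concept combines its sub-concepts under:
    # - a non-temporal (logical) constraint: same person in both states,
    # - a temporal constraint: entering occurs before being at the counter.
    same_person = enter.physical_objects == at_counter.physical_objects
    return same_person and before(enter.interval, at_counter.interval)

# Usage: two primitive states for the same tracked person.
person = ["person_1"]
enter = PrimitiveState("inside_entrance_zone", person, Interval(0, 5))
counter = PrimitiveState("close_to_counter", person, Interval(8, 20))
print(recognize_composite_event(enter, counter))  # True for these intervals
```

A real system would of course recognize such composites incrementally over a stream of detected states rather than from two fixed intervals, but the constraint structure is the same.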
Document type: Reports

https://hal.inria.fr/inria-00071397
Contributor: Rapport de Recherche Inria
Submitted on: Tuesday, May 23, 2006 - 5:11:19 PM
Last modification on: Tuesday, July 24, 2018 - 3:48:06 PM
Long-term archiving on: Tuesday, February 22, 2011 - 11:55:46 AM

Identifiers

  • HAL Id: inria-00071397, version 1

Citation

François Bremond, Nicolas Maillot, Monique Thonnat, Van-Thinh Vu. Ontologies For Video Events. RR-5189, INRIA. 2004. ⟨inria-00071397⟩
