Report (Research Report) Year: 2001

Visual Data Fusion : Application to Objects Localization and Exploration

Abstract

Visual sensors provide only uncertain and partial knowledge of a scene. In this report, we present a scene knowledge representation that makes it possible to integrate and fuse new, uncertain, and partial sensor measurements. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation relies mainly on ellipsoidal calculus, by means of a normal assumption for stochastic laws and ellipsoidal outer- or inner-bounding for uniform laws. With these approximations, we coarsely model objects by their bounding ellipsoid. We then build an efficient estimation process that integrates visual data online in order to refine the location and approximate shape of the objects. Based on this estimation scheme, we compute online, optimal exploratory motions for the camera.
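As an illustration of the kind of fusion described above, the sketch below combines a prior ellipsoidal estimate of an object's location with a new, uncertain visual measurement using a standard information-form (Kalman-like) update under a normal assumption. This is only a minimal sketch of the general idea under those assumptions; the function fuse_ellipsoids and the numerical values are hypothetical and do not reproduce the exact estimation scheme of the report.

import numpy as np

def fuse_ellipsoids(c1, P1, c2, P2):
    # Information-form fusion of two ellipsoidal (Gaussian) estimates:
    # each estimate is a center c and a positive-definite shape matrix P.
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused shape matrix (smaller ellipsoid)
    c = P @ (I1 @ c1 + I2 @ c2)         # fused center, weighted by information
    return c, P

# Hypothetical example: a prior estimate, elongated along the depth axis,
# is refined by a new visual measurement that is more precise in depth.
c_prior = np.array([0.0, 0.0, 1.0])
P_prior = np.diag([0.5, 0.5, 2.0])
c_meas = np.array([0.1, -0.05, 1.2])
P_meas = np.diag([0.2, 0.2, 0.8])

c_post, P_post = fuse_ellipsoids(c_prior, P_prior, c_meas, P_meas)
print(c_post)
print(P_post)

Each new measurement shrinks the bounding ellipsoid, which is the behavior the online refinement of object location and shape relies on.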

Domains

Other [cs.OH]
Main file
RR-4168.pdf (1.01 MB)

Dates and versions

inria-00072454, version 1 (24-05-2006)

Identifiers

  • HAL Id: inria-00072454, version 1

Cite

Grégory Flandin, François Chaumette. Visual Data Fusion : Application to Objects Localization and Exploration. [Research Report] RR-4168, INRIA. 2001. ⟨inria-00072454⟩