Conference Papers, Year: 2002

Visual data fusion for objects localization by active vision

Abstract

Visual sensors provide exclusively uncertain and partial knowledge of a scene. In this article, we present a suitable scene knowledge representation that makes integration and fusion of new, uncertain and partial sensor measures possible. It is based on a mixture of stochastic and set membership models. We consider that, for a large class of applications, an approximated representation is sufficient to build a preliminary map of the scene. Our approximation mainly results in ellipsoidal calculus by means of a normal assumption for stochastic laws and ellipsoidal outer or inner bounding for uniform laws. These approximations allow us to build an efficient estimation process integrating visual data online. Based on this estimation scheme, optimal exploratory motions of the camera can be automatically determined. Real-time experimental results validating our approach are finally given.
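To make the abstract's two main ingredients concrete, the following is a minimal Python sketch (not the authors' implementation; all function names and numbers are illustrative) of (1) fusing two independent Gaussian estimates of a 3-D point, as under the normal assumption for stochastic laws, and (2) over-bounding a uniform, box-shaped uncertainty by an ellipsoid so that both kinds of uncertainty can be handled with ellipsoidal calculus.

import numpy as np

def fuse_gaussians(x1, P1, x2, P2):
    # Fuse two independent Gaussian estimates (mean, covariance) of the
    # same point, using the information form of the update.
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused covariance (tighter than either input)
    x = P @ (I1 @ x1 + I2 @ x2)         # fused mean
    return x, P

def box_to_outer_ellipsoid(center, half_widths):
    # Minimum-volume ellipsoid enclosing an axis-aligned box
    # {center +/- half_widths}: {x : (x - c)^T E^{-1} (x - c) <= 1}
    # with E = n * diag(half_widths**2) for an n-dimensional box.
    n = len(half_widths)
    E = n * np.diag(np.asarray(half_widths, dtype=float) ** 2)
    return np.asarray(center, dtype=float), E

if __name__ == "__main__":
    # Two noisy estimates of the same 3-D point from two camera viewpoints
    # (illustrative values only).
    x1, P1 = np.array([0.10, 0.02, 1.50]), np.diag([0.04, 0.04, 0.20])
    x2, P2 = np.array([0.12, 0.00, 1.40]), np.diag([0.20, 0.04, 0.05])
    x, P = fuse_gaussians(x1, P1, x2, P2)
    print("fused mean:", x)
    print("fused covariance diagonal:", np.diag(P))

    # Ellipsoidal outer bound of a uniform (box-shaped) uncertainty.
    c, E = box_to_outer_ellipsoid([0.10, 0.00, 1.45], [0.05, 0.05, 0.10])
    print("outer ellipsoid shape matrix diagonal:", np.diag(E))

Running the script fuses the two viewpoint estimates into a single, tighter covariance and prints the shape matrix of the bounding ellipsoid; in the paper, measures of this kind are integrated online as the camera performs its exploratory motions.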
Main file: 2002_eccv_flandin.pdf (302.22 KB), produced by the author(s)

Dates and versions

inria-00352090, version 1 (12-01-2009)

Identifiers

  • HAL Id: inria-00352090, version 1

Cite

Grégory Flandin, François Chaumette. Visual data fusion for objects localization by active vision. European Conference on Computer Vision (ECCV'02), LNCS 2353, Copenhagen, Denmark, 2002, pp. 312-326. ⟨inria-00352090⟩
