Integrating Visual and Range Data for Robotic Object Detection

Abstract: The problem of object detection and recognition is a notoriously difficult one, and one that has been the focus of much work in the computer vision and robotics communities. Most work has concentrated on systems that operate purely on visual inputs (i.e., images) and largely ignores other sensor modalities. However, despite the great progress made down this track, the goal of high accuracy object detection for robotic platforms in cluttered real-world environments remains elusive. Instead of relying on information from the image alone, we present a method that exploits the multiple sensor modalities available on a robotic platform. In particular, our method augments a 2-d object detector with 3-d information from a depth sensor to produce a “multi-modal object detector.” We demonstrate our method on a working robotic system and evaluate its performance on a number of common household/office objects.
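To illustrate the general idea of augmenting a 2-d detector with depth information, the following minimal Python sketch combines the confidence score of a hypothetical image-only detector with simple 3-d cues (object range and metric height) derived from a depth patch, fused by a logistic model. All names, features, and weights here are assumptions for illustration, not the authors' actual pipeline.

    # Illustrative sketch only: fuses a hypothetical 2-d detector score with
    # simple depth-derived features, in the spirit of the multi-modal detector
    # described in the abstract. Names and parameters are assumptions.
    import numpy as np

    def depth_features(depth_patch, bbox_height_px, focal_length_px):
        """Toy 3-d cues for a candidate detection: range and metric height."""
        median_range = float(np.median(depth_patch))  # distance to object (m)
        # Pinhole-camera approximation of the object's physical height.
        metric_height = bbox_height_px * median_range / focal_length_px
        return np.array([median_range, metric_height])

    def fuse_scores(score_2d, feats_3d, weights, bias):
        """Logistic combination of the image score and the depth features."""
        z = weights[0] * score_2d + weights[1:] @ feats_3d + bias
        return 1.0 / (1.0 + np.exp(-z))

    # Example: a 120-pixel-tall candidate detection, roughly 1.5 m away,
    # with an assumed focal length of 525 px.
    patch = np.full((120, 60), 1.5)  # synthetic depth patch (metres)
    feats = depth_features(patch, bbox_height_px=120, focal_length_px=525.0)
    prob = fuse_scores(score_2d=0.8, feats_3d=feats,
                       weights=np.array([2.0, -0.1, 0.5]), bias=-1.0)
    print(f"fused detection probability: {prob:.3f}")

The design choice this sketch reflects is that depth adds scale and distance cues that a purely visual score cannot provide; how the actual paper extracts and weights such features is described in the full text linked below.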
Document type:
Conference paper
Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications - M2SFA2 2008, Oct 2008, Marseille, France. 2008

https://hal.inria.fr/inria-00326789
Contributor: Peter Sturm
Submitted on: Sunday, October 5, 2008 - 15:57:49
Last modified on: Monday, October 6, 2008 - 09:02:56
Long-term archiving on: Thursday, June 3, 2010 - 20:14:39

File

1569135672.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00326789, version 1

Citation

Stephen Gould, Paul Baumstarck, Morgan Quigley, Andrew Y. Ng, Daphne Koller. Integrating Visual and Range Data for Robotic Object Detection. Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications - M2SFA2 2008, Oct 2008, Marseille, France. 2008. 〈inria-00326789〉

Metrics

Record views: 506

File downloads: 380