
Integrating Visual and Range Data for Robotic Object Detection

Abstract: The problem of object detection and recognition is a notoriously difficult one, and one that has been the focus of much work in the computer vision and robotics communities. Most work has concentrated on systems that operate purely on visual inputs (i.e., images) and largely ignores other sensor modalities. However, despite the great progress made down this track, the goal of high accuracy object detection for robotic platforms in cluttered real-world environments remains elusive. Instead of relying on information from the image alone, we present a method that exploits the multiple sensor modalities available on a robotic platform. In particular, our method augments a 2-d object detector with 3-d information from a depth sensor to produce a “multi-modal object detector.” We demonstrate our method on a working robotic system and evaluate its performance on a number of common household/office objects.
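As a rough illustration of the kind of fusion the abstract describes, the sketch below combines a 2-d detector confidence for a candidate bounding box with simple depth-derived cues (object distance and an approximate metric height) through a logistic score. The feature choices, focal length, weights, and function names are assumptions made for this sketch, not the authors' actual method.

```python
import numpy as np

def depth_features(depth_patch, fx=525.0):
    """Hypothetical 3-d cues for a candidate box: median range and an
    approximate metric height recovered from the depth patch.
    `fx` is an assumed focal length in pixels (not taken from the paper)."""
    valid = depth_patch[np.isfinite(depth_patch) & (depth_patch > 0)]
    if valid.size == 0:
        return np.array([0.0, 0.0])
    z = np.median(valid)               # distance to the object (meters)
    h_pixels = depth_patch.shape[0]
    h_metric = z * h_pixels / fx       # pinhole-model estimate of real-world height (meters)
    return np.array([z, h_metric])

def fused_score(image_score, depth_patch, w=np.array([2.0, -0.1, 1.5]), b=-1.0):
    """Combine the 2-d detector confidence with the 3-d features through a
    simple logistic model; the weights here are placeholders, not learned values."""
    x = np.concatenate(([image_score], depth_features(depth_patch)))
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Toy usage: a candidate box whose depth patch sits about 1.2 m from the sensor.
patch = np.full((80, 60), 1.2)
print(fused_score(image_score=0.7, depth_patch=patch))
```

In practice the fusion weights would be learned from labeled detections rather than fixed by hand; the point of the sketch is only that depth adds scale and range cues that an image-only score cannot provide.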
Document type: Conference papers

https://hal.inria.fr/inria-00326789
Contributor: Peter Sturm
Submitted on: Sunday, October 5, 2008 - 3:57:49 PM
Last modification on: Friday, February 26, 2021 - 9:42:03 AM
Long-term archiving on: Thursday, June 3, 2010 - 8:14:39 PM

File

1569135672.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00326789, version 1

Citation

Stephen Gould, Paul Baumstarck, Morgan Quigley, Andrew Y. Ng, Daphne Koller. Integrating Visual and Range Data for Robotic Object Detection. Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications - M2SFA2 2008, Andrea Cavallaro and Hamid Aghajan, Oct 2008, Marseille, France. ⟨inria-00326789⟩

Metrics

Record views: 714
File downloads: 622