
Collaborative Visual SLAM Framework for a Multi-Robot System

Abstract : This paper presents a framework for collaborative visual SLAM using monocular cameras for a team of mobile robots. The robots perform SLAM individually using their on-board processors, thereby estimating the seven degrees of freedom (including scale) of the camera motion and creating a map of the environment as a pose-graph of keyframes. Each robot communicates with a central server by sending local keyframe information. The central server merges the local maps into a global map when a visual overlap is detected in the scene. In the background, the global map is continuously optimized using bundle adjustment techniques, and the updated pose information is communicated back as feedback to the individual robots. We present some preliminary experimental results towards testing the framework with two mobile robots in an indoor environment.
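As a rough illustration of the client-server keyframe exchange described in the abstract, the Python sketch below models a central server that collects local keyframes, checks for visual overlap, and hands corrected poses back to each robot. All names here (Keyframe, CentralServer, detect_overlap, merge_and_optimize, and so on) are hypothetical and do not come from the paper; the overlap check and the global optimization are stubs standing in for place recognition and bundle adjustment.

```python
# Illustrative sketch only: hypothetical names, not the authors' implementation.
from dataclasses import dataclass
from typing import Dict, List, Optional
import numpy as np


@dataclass
class Keyframe:
    """Local keyframe: a 7-DoF similarity pose (rotation, translation, scale) plus features."""
    robot_id: int
    frame_id: int
    pose_sim3: np.ndarray    # 4x4 similarity transform (includes scale)
    descriptors: np.ndarray  # visual descriptors used for overlap detection


class CentralServer:
    """Collects keyframes from all robots, merges maps when a visual overlap
    is found, and returns updated poses as feedback to each robot."""

    def __init__(self) -> None:
        self.global_map: List[Keyframe] = []
        self.corrections: Dict[int, Dict[int, np.ndarray]] = {}  # robot -> frame -> pose

    def receive_keyframe(self, kf: Keyframe) -> None:
        # Called whenever a robot sends local keyframe information.
        match = self.detect_overlap(kf)
        self.global_map.append(kf)
        if match is not None:
            self.merge_and_optimize(kf, match)

    def detect_overlap(self, kf: Keyframe) -> Optional[Keyframe]:
        # Placeholder for place recognition against keyframes from other robots.
        for other in self.global_map:
            if other.robot_id != kf.robot_id and self.appearance_similar(kf, other):
                return other
        return None

    def appearance_similar(self, a: Keyframe, b: Keyframe) -> bool:
        # Stub: a real system would compare the visual descriptors.
        return False

    def merge_and_optimize(self, kf: Keyframe, match: Keyframe) -> None:
        # Placeholder for aligning the two pose-graphs and running global
        # bundle adjustment in the background; here we just record the
        # (unchanged) poses as the "optimized" result.
        for node in self.global_map:
            self.corrections.setdefault(node.robot_id, {})[node.frame_id] = node.pose_sim3

    def feedback_for(self, robot_id: int) -> Dict[int, np.ndarray]:
        # Updated pose information that a robot applies to its local map.
        return self.corrections.get(robot_id, {})
```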
Document type : Conference papers

Cited literature [22 references]

https://hal.inria.fr/hal-02459361
Contributor : Philippe Martinet
Submitted on : Wednesday, January 29, 2020 - 12:53:59 PM
Last modification on : Tuesday, January 5, 2021 - 4:26:08 PM
Long-term archiving on : Thursday, April 30, 2020 - 3:45:36 PM

File

PPNIV15Nived.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02459361, version 1

Citation

Nived Chebrolu, David Marquez-Gamez, Philippe Martinet. Collaborative Visual SLAM Framework for a Multi-Robot System. 7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Sep 2015, Hamburg, Germany. ⟨hal-02459361⟩

Metrics

Record views : 115
File downloads : 470