
Collaborative Visual SLAM Framework for a Multi-Robot System

Abstract: This paper presents a framework for collaborative visual SLAM using monocular cameras for a team of mobile robots. Each robot performs SLAM individually on its on-board processor, estimating the seven degrees of freedom (including scale) of the camera motion and building a map of the environment as a pose-graph of keyframes. Each robot communicates with a central server by sending local keyframe information. The central server merges the local maps when a visual overlap is detected in the scene and creates a global map. In the background, the global map is continuously optimized using bundle adjustment, and the updated pose information is communicated back to the individual robots as feedback. We present preliminary experimental results from testing the framework with two mobile robots in an indoor environment.
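The client-server flow described in the abstract (robots submit keyframes; the server detects visual overlap between maps from different robots and merges them) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Keyframe` fields, the set-intersection overlap test standing in for real place recognition, and the `overlap_threshold` parameter are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Keyframe:
    robot_id: int          # which robot produced this keyframe
    frame_id: int          # index in that robot's local pose-graph
    pose: tuple            # 7-DoF Sim(3) pose, e.g. (tx, ty, tz, qx, qy, qz, scale) -- illustrative layout
    descriptors: frozenset # place-recognition features (stand-in for real visual descriptors)

class CentralServer:
    """Collects keyframes from all robots and records a merge link when two
    keyframes from different robots share enough visual overlap."""

    def __init__(self, overlap_threshold=3):
        self.global_map = []                    # merged pose-graph (flat list here)
        self.overlap_threshold = overlap_threshold

    def submit(self, kf: Keyframe):
        """Called with a keyframe sent by a robot; returns a merge event
        (robot_a, frame_a, robot_b, frame_b) if overlap is detected, else None."""
        match = self._find_overlap(kf)
        self.global_map.append(kf)
        if match is not None:
            # In the real system this would trigger a Sim(3) alignment of the
            # two local maps followed by bundle adjustment in the background;
            # here we only report the detected inter-robot link.
            return (match.robot_id, match.frame_id, kf.robot_id, kf.frame_id)
        return None

    def _find_overlap(self, kf):
        # Naive linear scan; a real server would use an indexed place-recognition
        # database (e.g. a bag-of-words vocabulary tree).
        for other in self.global_map:
            if other.robot_id != kf.robot_id and \
               len(other.descriptors & kf.descriptors) >= self.overlap_threshold:
                return other
        return None
```

For example, with two robots whose keyframes share three descriptors, the second submission triggers a merge event linking the two local pose-graphs:

```python
server = CentralServer(overlap_threshold=3)
kf_a = Keyframe(0, 0, (0, 0, 0, 0, 0, 0, 1), frozenset({"w1", "w2", "w3", "w4"}))
kf_b = Keyframe(1, 0, (1, 0, 0, 0, 0, 0, 1), frozenset({"w2", "w3", "w4", "w9"}))
server.submit(kf_a)            # first keyframe: no overlap yet
server.submit(kf_b)            # shares {w2, w3, w4} with robot 0 -> merge event
```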
Document type :
Conference papers

Cited literature: 22 references
Contributor: Philippe Martinet
Submitted on: Wednesday, January 29, 2020 - 12:53:59 PM
Last modification on: Wednesday, April 27, 2022 - 3:48:21 AM
Long-term archiving on: Thursday, April 30, 2020 - 3:45:36 PM


Files produced by the author(s)


  • HAL Id: hal-02459361, version 1


Nived Chebrolu, David Marquez-Gamez, Philippe Martinet. Collaborative Visual SLAM Framework for a Multi-Robot System. 7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Sep 2015, Hamburg, Germany. ⟨hal-02459361⟩


