Robust vision-based underwater homing using self-similar landmarks

Abstract: Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation over more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotation-invariant target design and recognition routine based on self-similar landmarks, which enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs well on limited processing power and demonstrate how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
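The paper's detection routine is not reproduced in this record, but the core idea of a self-similar landmark can be sketched in one dimension: a pattern p satisfying p(x) = p(αx) looks identical at any zoom by a power of α, so a matching score comparing a signal against a scaled copy of itself is scale-invariant. The sketch below is purely illustrative; the scale factor `ALPHA`, the square-wave pattern, and the scoring function are assumptions, not the authors' actual design.

```python
import math

ALPHA = 0.5  # illustrative self-similarity scale factor (not from the paper)


def self_similar_pattern(x):
    """1D self-similar square wave centered at the origin.

    The value depends only on the fractional part of log_ALPHA(x),
    so p(ALPHA * x) == p(x) for every x > 0: the pattern repeats
    under any zoom by a power of ALPHA.
    """
    t = math.log(x) / math.log(ALPHA)
    return 1.0 if (t - math.floor(t)) < 0.5 else 0.0


def self_similarity_score(signal, center, half_width):
    """Mean absolute difference between signal(center + d) and
    signal(center + ALPHA * d).

    Near zero when `center` sits on a self-similar landmark, and
    larger elsewhere; the comparison is between the signal and a
    rescaled copy of itself, so no absolute scale is assumed.
    """
    total = 0.0
    for d in range(1, half_width + 1):
        total += abs(signal(center + d) - signal(center + ALPHA * d))
    return total / half_width
```

A real detector would evaluate such a score along image scanlines (and symmetrically on both sides of the candidate center), with an extra term to reject uniform regions that trivially match themselves.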
Document type: Journal articles

Cited literature: 24 references
Contributor: Amaury Nègre
Submitted on: Wednesday, October 29, 2008 - 9:43:52 AM
Last modification on: Saturday, June 25, 2022 - 7:36:05 PM
Long-term archiving on: Tuesday, June 28, 2011 - 5:33:05 PM


Files produced by the author(s)


  • HAL Id: inria-00335278, version 1



Amaury Nègre, Cédric Pradalier, Matthew Dunbabin. Robust vision-based underwater homing using self-similar landmarks. Journal of Field Robotics, Wiley, 2008, Special Issue on Field and Service Robotics, 25 (6-7), pp.360-377. ⟨inria-00335278⟩


