Robust vision-based underwater homing using self-similar landmarks

Abstract: Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation over more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotation-invariant target design and recognition routine based on self-similar landmarks, which enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs well on limited processing power and demonstrate how the combined vision and control system enables robust target identification and docking in a variety of operating conditions.
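The key property behind the landmark design is self-similarity: a 1-D intensity pattern p is self-similar with factor α when p(x) = p(αx), and that property is preserved when the camera scales the image, which is what makes such patterns detectable at unknown range. The sketch below scores how self-similar an intensity window is by correlating it with an α-resampled copy of itself. This is an illustrative sketch under that definition, not the paper's detection routine; the function name and the nearest-neighbour resampling are assumptions made here for brevity.

```python
import numpy as np

def self_similarity_score(window, alpha=0.5):
    """Score (in [-1, 1]) how well a 1-D intensity window matches an
    alpha-scaled copy of itself.

    Illustrative sketch only: a truly self-similar pattern satisfies
    p(x) = p(alpha * x), so its normalised cross-correlation with its
    own scaled resampling is close to 1.
    """
    window = np.asarray(window, dtype=float)
    n = len(window)
    # Resample the window at alpha * x (nearest-neighbour, for simplicity).
    idx = (np.arange(n) * alpha).astype(int)
    scaled = window[idx]
    # Zero-mean normalised cross-correlation between window and scaled copy.
    w = window - window.mean()
    s = scaled - scaled.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(s)
    if denom == 0.0:
        return 0.0
    return float(np.dot(w, s) / denom)
```

For example, the pattern sin(2π log2 x) is self-similar with α = 0.5 (halving x shifts the argument by exactly one period), so it scores near 1, while an unstructured noise window scores near 0; thresholding such a score is one simple way to separate landmark candidates from background clutter.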
Document type: Journal article
Cited literature: 24 references

https://hal.inria.fr/inria-00335278
Contributor: Amaury Nègre
Submitted on: Wednesday, October 29, 2008 - 9:43:52 AM
Last modification on: Thursday, October 11, 2018 - 8:48:02 AM
Long-term archiving on: Tuesday, June 28, 2011 - 5:33:05 PM

File: JFR.pdf (produced by the author(s))

Identifiers

  • HAL Id: inria-00335278, version 1

Citation

Amaury Nègre, Cédric Pradalier, Matthew Dunbabin. Robust vision-based underwater homing using self-similar landmarks. Journal of Field Robotics, Wiley, 2008, Special Issue on Field and Service Robotics, 25 (6-7), pp.360-377. ⟨inria-00335278⟩
