Abstract: This paper presents a novel method and apparatus for building 3D dense visual maps of large-scale unstructured environments for autonomous navigation and real-time localisation. The main contribution of the paper is an efficient and accurate 3D world representation that extends the boundaries of state-of-the-art dense visual mapping to large scales. This is achieved via an omni-directional key-frame representation of the environment which is able to synthesise photo-realistic views of captured environments at arbitrary locations. Locally the representation is image-based (ego-centric) and is composed of accurate augmented spherical panoramas combining photometric information (RGB), depth information (D) and saliency for all viewing directions at a particular point in space (i.e. a point in the light field). The spheres are related by a graph of 6-degree-of-freedom poses (3 dof translation and 3 dof rotation) which are estimated through multi-view spherical registration. It is shown that this world representation can be used to perform robust real-time localisation (in 6 dof) of any configuration of visual sensors within their environment, whether monocular, stereo or multi-view. In contrast to feature-based approaches, an efficient direct image registration technique is formulated. This approach directly exploits the advantages of the spherical representation by minimising a photometric error between a current image and a reference sphere. Two novel multi-camera acquisition systems have been developed and calibrated to acquire this information; the second system is reported here for the first time. Given the robustness and efficiency of this representation, field experiments demonstrating autonomous navigation and large-scale mapping are reported in detail for challenging unstructured environments containing vegetation, pedestrians, varying illumination conditions, trams and dense traffic.
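The direct registration idea summarised above can be illustrated with a minimal, hypothetical sketch: rather than matching features, the pose is chosen to minimise a photometric error between the reference and current images. The toy below restricts the search to integer 2D translations (a stand-in for the paper's full 6-dof spherical warp); all function names and the brute-force search are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def photometric_error(ref, cur, shift):
    """Sum of squared intensity differences after shifting `cur` by `shift`.
    This is the direct (appearance-based) cost: no features are extracted."""
    dy, dx = shift
    shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
    return float(np.sum((ref - shifted) ** 2))

def register_translation(ref, cur, max_shift=3):
    """Direct registration toy: exhaustively minimise the photometric error
    over integer translations (the real method warps a full sphere in 6 dof
    and solves iteratively rather than by exhaustive search)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = photometric_error(ref, cur, (dy, dx))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
ref = rng.random((32, 32))                            # reference "sphere"
cur = np.roll(np.roll(ref, -2, axis=0), 1, axis=1)    # displaced current view
print(register_translation(ref, cur))  # recovers the (2, -1) displacement
```

In the paper's full formulation the warp is parameterised by the 6-dof pose relating the current sensor to the reference sphere, and the depth channel (D) of the augmented panorama supplies the geometry needed to predict each pixel's appearance from the new viewpoint.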