Abstract: Our goal is to revisit rank order coding by proposing an original exact decoding procedure for it. Rank order coding was proposed by Simon Thorpe et al. to explain the impressive performance of the human visual system in recognizing objects. It is based on the hypothesis that the retina represents the visual stimulus by the order in which its cells are activated. A classical rank order coder/decoder was then designed (Van Rullen and Thorpe, 2001) involving three stages: (i) a model of the stimulus transform in the retina consisting of a redundant filter bank analysis; (ii) a sorting stage that ranks the filters according to their degree of activation; (iii) a straightforward decoding procedure that consists of a weighted sum of the most activated filters. Focusing on this last stage, it appears that the decoding procedure employed yields reconstruction errors that limit the rate/quality performance of the model when used as an image codec. Previous attempts in the literature to overcome this issue either are time consuming and alter the coding procedure, or are infeasible for standard-size images and lack mathematical support. Here we solve this problem in an original fashion using frame theory, where a frame of a vector space generalizes the notion of a basis. Our contribution is threefold. First, we add an adequate scaling function to the filter bank under study, a choice with both a mathematical and a biological justification. Second, we prove that the analyzing filter bank considered is a frame, and we then define the corresponding dual frame, which is necessary for exact image reconstruction. Finally, to deal with the problem of memory overhead, we design an original recursive out-of-core blockwise algorithm for the computation of this dual frame. Our work provides a mathematical formalism for the retinal model under study and specifies a simple and exact reverse transform for it.
Furthermore, the framework presented here can be extended to several models of the visual cortical areas using redundant representations.
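To make the central idea concrete, the following is a minimal illustrative sketch (not the authors' retinal filter bank) in which a random redundant matrix F stands in for the analysis filter bank. Any full-rank redundant analysis operator of this kind is a frame, and synthesizing with its canonical dual frame (the Moore-Penrose pseudo-inverse) recovers the signal exactly, whereas decoding with the analysis filters themselves, as in the classical weighted-sum reconstruction, does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy redundant analysis operator: m > n rows acting on signals in R^n.
# This stands in for the retinal filter bank; any full-rank F here is a frame.
n, m = 8, 20
F = rng.standard_normal((m, n))     # rows play the role of analysis filters

x = rng.standard_normal(n)          # toy "stimulus"
c = F @ x                           # analysis coefficients (filter activations)

# Inexact decoding: weighted sum of the analysis filters themselves,
# analogous to the classical reconstruction. It fails because the
# filter bank is redundant, not orthonormal.
x_naive = F.T @ c

# Exact decoding: synthesize with the canonical dual frame
# F_dual = (F^T F)^{-1} F^T, i.e. the Moore-Penrose pseudo-inverse of F.
F_dual = np.linalg.pinv(F)
x_exact = F_dual @ c

print(np.linalg.norm(x - x_naive))  # large reconstruction error
print(np.linalg.norm(x - x_exact))  # error at machine precision
```

The paper's contribution is, in part, computing such a dual frame for a realistic retinal filter bank, where the operator is far too large to pseudo-invert directly, hence the recursive out-of-core blockwise algorithm.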