Poster presentation

Microsaccades enable efficient synchrony-based visual feature learning and detection

Timothée Masquelier (timothee.masquelier@alum.mit.edu), Geoffrey Portelli, Pierre Kornprobst

Institut de la Vision, UPMC Université Paris 06, Paris, 75012, France

CNRS, UMR 7210, Paris, 75012, France

Neuromathcomp Project Team, Inria Sophia Antipolis Méditerranée, 06902, France

BMC Neuroscience 2014, 15(Suppl 1):P121
http://www.biomedcentral.com/1471-2202/15/S1/P121
doi:10.1186/1471-2202-15-S1-P121

From: Abstracts from the Twenty Third Annual Computational Neuroscience Meeting: CNS*2014, Québec City, Canada, 26-31 July 2014 (http://www.cnsorg.org/cns-2014). The publication charges for this supplement were funded by the Organization for Computational Neurosciences.
© 2014 Masquelier et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fixational eye movements are common across vertebrates, yet their functional roles, if any, are debated [1]. To investigate this issue, we exposed the Virtual Retina simulator [2] to natural images, generated realistic drifts and microsaccades using the model of [3], and analyzed the output spike trains of the parvocellular retinal ganglion cells (RGCs).
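For illustration, the fixational input can be sketched as drift plus occasional microsaccades. This toy uses a plain random walk rather than the self-avoiding walk of the actual model [3], and all function names, parameter names, and values are ours, chosen to match the magnitudes quoted in this abstract:

```python
import numpy as np

def gaze_trajectory(duration_s=2.0, dt=0.001, drift_speed_deg=20 / 60.0,
                    msacc_rate_hz=1.5, msacc_amp_deg=0.5,
                    msacc_dur_s=0.025, seed=0):
    """Toy 2-D gaze trace (in degrees): slow random-walk drift (~20 arcmin/s)
    punctuated by ballistic ~0.5-degree, 25-ms microsaccades."""
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt)
    pos = np.zeros((n, 2))
    i = 0
    while i < n - 1:
        if rng.random() < msacc_rate_hz * dt:
            # Microsaccade: straight jump spread over msacc_dur_s.
            n_steps = int(msacc_dur_s / dt)
            direction = rng.standard_normal(2)
            direction /= np.linalg.norm(direction)
            for _ in range(min(n_steps, n - 1 - i)):
                pos[i + 1] = pos[i] + direction * (msacc_amp_deg / n_steps)
                i += 1
        else:
            # Drift: one small step in a uniformly random direction.
            angle = rng.uniform(0.0, 2.0 * np.pi)
            pos[i + 1] = pos[i] + drift_speed_deg * dt * np.array(
                [np.cos(angle), np.sin(angle)])
            i += 1
    return pos
```

Such a trace, scanned over a static natural image, drives the retina simulator with a spatiotemporal stimulus instead of a frozen frame.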

We first computed cross-correlograms between pairs of RGCs that are strongly excited by the image corresponding to the mean eye position. Not surprisingly, in the absence of eye movements, that is, when analyzing the tonic (sustained) response to a static image, these cross-correlograms are flat. Adding slow drift (~20 arcmin/s, a self-avoiding random walk) creates long-timescale (>1 s) correlations, because both cells tend to have high firing rates at central positions. Adding microsaccades (~0.5° in 25 ms, i.e. ~20°/s) creates short-timescale (tens of ms) correlations: cells that are strongly excited at a particular landing location tend to spike synchronously shortly after the landing.
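The cross-correlogram itself is simply a histogram of pairwise spike-time differences; a minimal generic version (our names and default parameters, not code from the study) can be sketched as:

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, window_ms=100.0):
    """Histogram of spike-time differences (t_b - t_a) within +/- window_ms.
    A flat histogram indicates no correlation; a narrow central peak
    indicates that the two cells tend to fire synchronously."""
    edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    diffs = []
    for t in spikes_a:
        d = spikes_b - t                     # all lags relative to this spike
        diffs.append(d[np.abs(d) <= window_ms])
    diffs = np.concatenate(diffs) if diffs else np.empty(0)
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges
```

A flat output corresponds to the static-image case; a broad hump to drift alone; a sharp peak near zero lag to the post-microsaccade synchrony described above.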

What do the patterns of synchronous spikes represent? To investigate this, we fed the RGC spike trains to neurons equipped with spike-timing-dependent plasticity (STDP) and lateral inhibitory connections, as in [4]. The neurons self-organized, each one selecting a set of afferents that consistently fired synchronously. We then reconstructed the corresponding visual stimuli by convolving the synaptic weight matrices with the RGC receptive fields. In most cases we could easily recognize what had been learned (e.g. a face), and the neuron was selective (e.g. it responded only to microsaccades that landed on a face). Without eye movements, or with the drift alone, STDP-based learning failed, because it requires correlations at a timescale roughly matching the STDP time constants [5].
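A minimal sketch of the plasticity rule, assuming a standard additive exponential STDP window in the spirit of [4] but omitting the lateral inhibition and full neuron dynamics; all names and constants here are illustrative, not taken from the study:

```python
import numpy as np

def stdp_update(weights, pre_times_ms, post_time_ms,
                a_plus=0.03, a_minus=0.025, tau_ms=20.0):
    """Additive exponential STDP for one postsynaptic spike: afferents that
    fired shortly BEFORE the post spike are potentiated, those that fired
    AFTER are depressed. Weights are clipped to [0, 1]; NaN = silent."""
    w = weights.copy()
    for i, t_pre in enumerate(pre_times_ms):
        if np.isnan(t_pre):
            continue
        dt = post_time_ms - t_pre          # dt > 0: pre leads post (causal)
        if dt >= 0:
            w[i] += a_plus * np.exp(-dt / tau_ms)   # LTP
        else:
            w[i] -= a_minus * np.exp(dt / tau_ms)   # LTD
    return np.clip(w, 0.0, 1.0)
```

Because the exponential window decays with tau_ms, only afferents firing within a few tens of ms of the postsynaptic spike are reliably potentiated, which is why correlations at the drift timescale (>1 s) cannot drive this learning.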

Microsaccades are thus necessary to generate a synchrony-based coding scheme. More specifically, after each microsaccade landing, the cells most strongly excited by the image at the landing location tend to fire their first spikes synchronously. Patterns of synchronous spikes can be decoded rapidly, as soon as the first spikes are received, by downstream “coincidence detector” neurons, which do not need to know the landing times. Finally, the connectivity required to do so can emerge spontaneously with STDP. Taken together, these results suggest a new role for microsaccades, namely to enable efficient visual feature learning and detection through synchronization, which differs from other proposals such as time-to-first-spike coding with respect to microsaccade landing times.
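The read-out principle can be illustrated with a leaky coincidence detector: with a short membrane time constant, only near-synchronous inputs summate to threshold, so no reference to the landing time is needed. A minimal sketch (our names, parameters, and simplified dynamics, not the study's implementation):

```python
import numpy as np

def coincidence_detector(spike_times_ms, weights, tau_ms=10.0, threshold=3.0):
    """Leaky integrate-and-fire read-out: the membrane potential decays
    between input spikes, so only near-synchronous weighted inputs can
    sum to threshold. Returns the first crossing time, or None."""
    v, t_prev = 0.0, None
    for t, w in sorted(zip(spike_times_ms, weights)):
        if t_prev is not None:
            v *= np.exp(-(t - t_prev) / tau_ms)  # leak since last input
        v += w
        t_prev = t
        if v >= threshold:
            return t  # detection at the first threshold crossing
    return None
```

The same four unit-weight inputs fire the detector when they arrive within a couple of milliseconds of each other, but not when they are spread tens of milliseconds apart, which is the essence of decoding synchrony without knowing the landing times.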

Acknowledgements

We thank A. Wohrer for developing the Virtual Retina simulator and for the quality of his support, and M. Gilson for insightful discussions. The research received partial financial support from the European Commission's 7th Framework Programme for Research, under Grant Agreement no. 600847: RENVISION, a project of the Future and Emerging Technologies (FET) Programme (Neuro-bio-inspired systems (NBIS) FET-Proactive Initiative).

References

1. Martinez-Conde S, Otero-Millan J, Macknik SL: The impact of microsaccades on vision: towards a unified theory of saccadic function. Nat Rev Neurosci 2013, 14:83-96.
2. Wohrer A, Kornprobst P: Virtual Retina: a biological retina model and simulator, with contrast gain control. J Comput Neurosci 2009, 26:219-249. doi:10.1007/s10827-008-0108-4
3. Engbert R, Mergenthaler K, Sinn P, Pikovsky A: An integrated model of fixational eye movements and microsaccades. Proc Natl Acad Sci U S A 2011, 108:E765-70. doi:10.1073/pnas.1102730108
4. Masquelier T, Guyonneau R, Thorpe SJ: Competitive STDP-based spike pattern learning. Neural Comput 2009, 21:1259-1276. doi:10.1162/neco.2008.06-08-804
5. Gilson M, Masquelier T, Hugues E: STDP allows fast rate-modulated coding with Poisson-like spike trains. PLoS Comput Biol 2011, 7:e1002231. doi:10.1371/journal.pcbi.1002231