Age-dependent saccadic models for predicting eye movements

Abstract: How people look at visual information reveals fundamental information about them, their interests, and their state of mind. While previous visual attention models output static 2-dimensional saliency maps, saccadic models predict not only what observers look at but also how they move their eyes to explore the scene. Here we demonstrate that saccadic models are a flexible framework that can be tailored to emulate gaze patterns from childhood to adulthood. The proposed age-dependent saccadic model not only outputs human-like, i.e. age-specific, visual scanpaths, but also significantly outperforms other state-of-the-art saliency models.
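To illustrate the difference between a static saliency map and a saccadic model that generates a scanpath, here is a minimal sketch (a hypothetical illustration, not the authors' implementation): it samples a sequence of fixations from a saliency map with a simple winner-take-all rule and inhibition of return, so that successive fixations explore different regions. The function name, parameters, and toy map are assumptions made for this example.

import numpy as np

def sample_scanpath(saliency, n_fixations=10, ior_radius=2, ior_decay=0.5):
    # Toy winner-take-all sampler with inhibition of return (IoR):
    # after each fixation, saliency around the fixated location is
    # attenuated so the next fixation moves elsewhere.
    # Illustration only; not the model described in the paper.
    sal = saliency.astype(float).copy()
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    scanpath = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        scanpath.append((int(x), int(y)))
        # Suppress a disc around the current fixation (inhibition of return).
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2
        sal[mask] *= ior_decay
    return scanpath

# Toy example: a saliency map with two hot spots.
sal_map = np.zeros((20, 20))
sal_map[5, 5] = 1.0
sal_map[14, 12] = 0.8
print(sample_scanpath(sal_map, n_fixations=4))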
Document type:
Conference paper
IEEE International Conference on Image Processing (ICIP 2017), Sep 2017, Shanghai, China.

https://hal.inria.fr/hal-01651151
Contributor: Olivier Le Meur
Submitted on: Tuesday, November 28, 2017 - 17:07:07
Last modified on: Wednesday, May 16, 2018 - 11:24:14

Identifiers

  • HAL Id: hal-01651151, version 1

Citation

Olivier Le Meur, Antoine Coutrot, Adrien Le Roch, Andrea Helo, Pia Rämä, et al. Age-dependent saccadic models for predicting eye movements. IEEE International Conference on Image Processing (ICIP 2017), Sep 2017, Shanghai, China. 〈hal-01651151〉
