G. Boccignone and M. Ferraro, Modelling gaze shift as a constrained random walk, Physica A: Statistical Mechanics and its Applications, vol.331, pp.207-218, 2004.

G. Boccignone and M. Ferraro, Modelling eye-movement control via a constrained search approach, EUVIP, pp.235-240, 2011.

A. Borji and L. Itti, State-of-the-art in Visual Attention Modeling, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.35, pp.185-207, 2013.

A. Borji and L. Itti, CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research, CVPR 2015 Workshop on the Future of Datasets, 2015.

K. Breeden and P. Hanrahan, Gaze data for the analysis of attention in feature films, ACM Transactions on Applied Perception (TAP), vol.14, p.23, 2017.

N. D. B. Bruce and J. K. Tsotsos, Saliency, attention and visual search: an information theoretic approach, Journal of Vision, vol.9, pp.1-24, 2009.

Z. Che, A. Borji, G. Zhai, and X. Min, Invariance analysis of saliency models versus human gaze during scene free viewing, 2018.

M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, Multi-level Net: A Visual Saliency Prediction Model, European Conference on Computer Vision, pp.302-315, 2016.

M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, Predicting human eye fixations via an LSTM-based saliency attentive model, 2016.

A. Coutrot, J. H. Hsiao, and A. B. Chan, Scanpath modeling and classification with hidden Markov models, Behavior Research Methods, vol.50, pp.362-379, 2018.
URL : https://hal.archives-ouvertes.fr/hal-02348515

J. Deng, W. Dong, R. Socher, L. Li, K. Li et al., ImageNet: A large-scale hierarchical image database, 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp.248-255, 2009.

B. Follet, O. Le Meur, and T. Baccino, New insights into ambient and focal visual fixations using an automatic classification algorithm, i-Perception, vol.2, pp.592-610, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00746032

L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. on PAMI, vol.20, pp.1254-1259, 1998.

T. Judd, K. Ehinger, F. Durand, and A. Torralba, Learning to predict where humans look, Proceedings of the IEEE International Conference on Computer Vision, pp.2106-2113, 2009.

K. Krejtz, A. Duchowski, and A. Çöltekin, High-Level Gaze Metrics From Map Viewing: Charting Ambient/Focal Visual Attention, Proceedings of the 2nd International Workshop on Eye Tracking for Spatial Research, 2014.

M. Kümmerer, L. Theis, and M. Bethge, Deep Gaze I: Boosting saliency prediction with feature maps trained on ImageNet, 2014.

M. Kümmerer, T. S. A. Wallis, and M. Bethge, DeepGaze II: Reading fixations from deep features trained on object recognition, 2016.

O. Le Meur and T. Baccino, Methods for comparing scanpaths and saliency maps: strengths and weaknesses, Behavior Research Methods, vol.45, pp.251-266, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00757615

O. Le Meur and A. Coutrot, Introducing context-dependent and spatially-variant viewing biases in saccadic models, Vision Research, vol.121, pp.72-84, 2016.

O. Le Meur, P. Le Callet, D. Barba, and D. Thoreau, A coherent computational approach to model bottom-up visual attention, IEEE Trans. on PAMI, vol.28, pp.802-817, 2006.
URL : https://hal.archives-ouvertes.fr/hal-00669578

O. Le Meur and Z. Liu, Saccadic model of eye movements for free-viewing condition, Vision Research, vol.116, pp.152-164, 2015.

J. Pan, C. Canton-Ferrer, K. McGuinness, N. E. O'Connor, J. Torres et al., SalGAN: Visual saliency prediction with generative adversarial networks, 2017.

J. Pan, E. Sayrol, X. Giro-i-Nieto, K. McGuinness, and N. E. O'Connor, Shallow and deep convolutional networks for saliency prediction, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.598-606, 2016.

S. Pannasch, J. Schulz, and B. M. Velichkovsky, On the control of visual fixation durations in free viewing of complex images, Attention, Perception, & Psychophysics, vol.73, pp.1120-1132, 2011.

N. Riche, M. Duvinage, M. Mancas, B. Gosselin, and T. Dutoit, Saliency and human fixations: state-of-the-art and study of comparison metrics, Proceedings of the IEEE international conference on computer vision, pp.1153-1160, 2013.

D. D. Salvucci and J. H. Goldberg, Identifying fixations and saccades in eye-tracking protocols, Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, pp.71-78, 2000.

C. Shen and Q. Zhao, Webpage Saliency, 2014.

K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, 2014.

T. J. Smith, Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes, Journal of Vision, vol.13, p.16, 2013.

C. B. Trevarthen, Two mechanisms of vision in primates, Psychologische Forschung, vol.31, pp.299-337, 1968.

P. J. A. Unema, S. Pannasch, M. Joos, and B. M. Velichkovsky, Time course of information processing during scene perception: The relationship between saccade amplitude and fixation duration, Visual Cognition, vol.12, pp.473-494, 2005.

B. M. Velichkovsky, M. Joos, J. R. Helmert, and S. Pannasch, Two visual systems and their eye movements: Evidence from static and dynamic scene perception, Proceedings of the XXVII Conference of the Cognitive Science Society, pp.2283-2288, 2005.

D. S. Wooding, Fixation maps: quantifying eye-movement traces, Proceedings of the 2002 Symposium on Eye Tracking Research & Applications, ACM, pp.31-36, 2002.

G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, Wiley, 1982.