M. T. Mercadante, E. C. Macedo, P. M. Baptista, C. S. Paula, and J. S. Schwartzman, Saccadic movements using eye-tracking technology in individuals with autism spectrum disorders: pilot study, Arquivos de Neuro-Psiquiatria, vol.64, issue.3A, pp.559-562, 2006.

E. Thorup, P. Nyström, G. Gredebäck, S. Bölte, and T. Falck-Ytter, Altered gaze following during live interaction in infants at risk for autism: an eye tracking study, Molecular Autism, vol.7, issue.1, p.12, 2016.

G. Dawson, Understanding the nature of face processing impairment in autism: insights from behavioral and electrophysiological studies, Developmental Neuropsychology, vol.27, issue.3, pp.403-424, 2005.

M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, Predicting human eye fixations via an LSTM-based saliency attentive model, IEEE Transactions on Image Processing, vol.27, issue.10, pp.5142-5154, 2018.

J. Pan, C. C. Ferrer, K. McGuinness, N. E. O'Connor, J. Torres et al., SalGAN: Visual saliency prediction with generative adversarial networks, 2017.

N. Riche, M. Mancas, M. Duvinage, M. Mibulumukini, B. Gosselin et al., RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis, Signal Processing: Image Communication, vol.28, issue.6, pp.642-658, 2013.

L. Zhang, M. H. Tong, T. K. Marks, H. Shan, and G. W. Cottrell, SUN: A Bayesian framework for saliency using natural statistics, Journal of Vision, vol.8, issue.7, p.32, 2008.

A. Garcia-Diaz, X. R. Fdez-Vidal, X. M. Pardo, and R. Dosil, Saliency from hierarchical adaptation through decorrelation and variance normalization, Image and Vision Computing, vol.30, issue.1, pp.51-64, 2012.

O. Le Meur, A. Coutrot, Z. Liu, P. Rämä, A. Le Roch et al., Visual attention saccadic models learn to emulate gaze patterns from childhood to adulthood, IEEE Transactions on Image Processing, vol.26, issue.10, pp.4777-4789, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01650322

O. Krishna, A. Helo, P. Rämä, and K. Aizawa, Gaze distribution analysis and saliency prediction across age groups, PLoS ONE, vol.13, issue.2, p.e0193149, 2018.

M. Jiang and Q. Zhao, Learning visual attention to identify people with autism spectrum disorder, Proceedings of the IEEE International Conference on Computer Vision, pp.3267-3276, 2017.

H. Duan, G. Zhai, X. Min, Y. Fang, Z. Che et al., Learning to predict where the children with ASD look, 2018 25th IEEE International Conference on Image Processing (ICIP), pp.704-708, 2018.

S. Fan, Z. Shen, M. Jiang, B. Koenig, J. Xu et al., Emotional attention: A study of image sentiment and visual attention, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.7521-7531, 2018.

M. Kümmerer, T. Wallis, and M. Bethge, DeepGaze II: Reading fixations from deep features trained on object recognition, 2016.

M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, A deep multi-level network for saliency prediction, International Conference on Pattern Recognition (ICPR), 2016.

K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, 2014.

H. Duan, G. Zhai, X. Min, Z. Che, Y. Fang et al., A dataset of eye movements for the children with autism spectrum disorder, ACM Multimedia Systems Conference (MMSys'19), 2019.

T. Judd, F. Durand, and A. Torralba, Fixations on low-resolution images, Journal of Vision, vol.11, issue.4, p.14, 2011.

Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand et al., MIT saliency benchmark, 2015.

O. Le Meur and T. Baccino, Methods for comparing scanpaths and saliency maps: strengths and weaknesses, Behavior Research Methods, vol.45, issue.1, pp.251-266, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00757615

X. Hou, J. Harel, and C. Koch, Image signature: Highlighting sparse salient regions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.34, issue.1, pp.194-201, 2012.

J. Harel, C. Koch, and P. Perona, Graph-based visual saliency, Advances in Neural Information Processing Systems, pp.545-552, 2007.

A. Alink and I. Charest, Individuals with clinically relevant autistic traits tend to have an eye for detail, 2018.