R. el Kaliouby and P. Robinson, Real-time inference of complex mental states from facial expressions and head gestures, in Real-Time Vision for Human-Computer Interaction, pp.181-200, 2005.

T. Baltrušaitis, D. McDuff, N. Banda, M. Mahmoud, R. el Kaliouby et al., Real-time inference of mental states from facial expressions and upper body gestures, Face and Gesture 2011, pp.909-914, 2011.
DOI : 10.1109/FG.2011.5771372

T. Baltrušaitis, P. Robinson, and L.-P. Morency, OpenFace: An open source facial behavior analysis toolkit, 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp.1-10, 2016.

Z. Cao, T. Simon, S. Wei, and Y. Sheikh, Realtime multi-person 2D pose estimation using part affinity fields, CVPR, 2017.

T. Simon, H. Joo, I. Matthews, and Y. Sheikh, Hand keypoint detection in single images using multiview bootstrapping, CVPR, 2017.

D. Kahneman, Thinking, fast and slow, 2011.

M. Poh, D. J. McDuff, and R. W. Picard, Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam, IEEE Transactions on Biomedical Engineering, vol.58, issue.1, pp.7-11, 2011.
DOI : 10.1109/TBME.2010.2086456

J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio et al., Real-time human pose recognition in parts from single depth images, Communications of the ACM, vol.56, issue.1, pp.116-124, 2013.
DOI : 10.1145/2398356.2398381

R. Stiefelhagen, J. Yang, and A. Waibel, A Model-Based Gaze Tracking System, International Journal on Artificial Intelligence Tools, vol.6, issue.2, pp.193-209, 1997.
DOI : 10.1142/S0218213097000116

N. Charness, E. M. Reingold, M. Pomplun, and D. M. Stampe, The perceptual aspect of skilled performance in chess: Evidence from eye movements, Memory & Cognition, vol.29, issue.8, pp.1146-1152, 2001.

L. Paletta, A. Dini, C. Murko, S. Yahyanejad, M. Schwarz et al., Towards Real-time Probabilistic Evaluation of Situation Awareness from Human Gaze in Human-Robot Interaction, Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, HRI '17, pp.247-248, 2017.

T. Giraud, M. Soury, J. Hua, A. Delaborde, M. Tahon et al., Multimodal Expressions of Stress during a Public Speaking Task: Collection, Annotation and Global Analyses, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp.417-422, 2013.
DOI : 10.1109/ACII.2013.75

URL : https://hal.archives-ouvertes.fr/hal-01443842

M. K. Abadi, J. Staiano, A. Cappelletti, M. Zancanaro, and N. Sebe, Multimodal engagement classification for affective cinema, Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on, pp.411-416, 2013.

E. M. Reingold and N. Charness, Perception in chess: Evidence from eye movements, in Cognitive Processes in Eye Guidance, pp.325-354, 2005.

M. Portaz, M. Garcia, A. Barbulescu, A. Begault, L. Boissieux et al., Figurines, a multimodal framework for tangible storytelling, WOCCI 2017 - 6th Workshop on Child Computer Interaction at ICMI 2017 - 19th ACM International Conference on Multimodal Interaction, 2017.

D. Vaufreydaz and A. Nègre, MobileRGBD, An Open Benchmark Corpus for mobile RGB-D Related Algorithms, 13th International Conference on Control, Automation, Robotics and Vision, 2014.

K. Holmqvist, M. Nyström, R. Andersson, R. Dewhurst, H. Jarodzka et al., Eye tracking: A comprehensive guide to methods and measures, OUP Oxford, 2011.

A. Poole and L. J. Ball, Eye Tracking in HCI and Usability Research, Encyclopedia of human computer interaction, pp.211-219, 2006.
DOI : 10.4018/978-1-59140-562-7.ch034

C. Ehmke and S. Wilson, Identifying web usability problems from eye-tracking data, Proceedings of the 21st British HCI Group Annual Conference on People and Computers: HCI... but not as we know it, pp.119-128, 2007.

M. den Uyl and H. van Kuilenburg, The FaceReader: Online facial expression recognition, Proceedings of Measuring Behavior, pp.589-590, 2005.

O. Langner, R. Dotsch, G. Bijlstra, D. H. Wigboldus, S. T. Hawk et al., Presentation and validation of the Radboud Faces Database, Cognition & Emotion, vol.24, issue.8, pp.1377-1388, 2010.

E. Goeleven, R. De Raedt, L. Leyman, and B. Verschuere, The Karolinska Directed Emotional Faces: A validation study, Cognition & Emotion, vol.22, issue.6, pp.1094-1118, 2008.

G. Bijlstra and R. Dotsch, FaceReader 4 emotion classification performance on images from the Radboud Faces Database, 2015.

S. M. Anzalone, S. Boucenna, S. Ivaldi, and M. Chetouani, Evaluating the Engagement with Social Robots, International Journal of Social Robotics, vol.7, issue.4, pp.465-478, 2015.

URL : https://hal.archives-ouvertes.fr/hal-01158293

J. A. Harrigan, Self-touching as an indicator of underlying affect and language processes, Social Science & Medicine, vol.20, issue.11, pp.1161-1168, 1985.
DOI : 10.1016/0277-9536(85)90193-5

W. Johal, D. Pellier, C. Adam, H. Fiorino, and S. Pesty, A cognitive and affective architecture for social human-robot interaction, Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, pp.71-72, 2015.

J. Aigrain, M. Spodenkiewicz, S. Dubuisson, M. Detyniecki, D. Cohen et al., Multimodal stress detection from multiple assessments, IEEE Transactions on Affective Computing, 2016.
DOI : 10.1109/TAFFC.2016.2631594

URL : https://hal.archives-ouvertes.fr/hal-01416517