D. Akkil, P. Dey, D. Salian, N. Rajput, R. Bernhaupt et al., Gaze awareness in agent-based early-childhood learning application, pp.447-466, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01678416

D. Akkil, B. Thankachan, and P. Isokoski, I see what you see: Gaze awareness in mobile video collaboration, Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. pp. 32:1-32:9. ETRA '18, 2018.

S. Antifakos, A. Schwaninger, and B. Schiele, Evaluating the effects of displaying uncertainty in context-aware applications, UbiComp 2004: Ubiquitous Computing, pp.54-69, 2004.

M. Argyle and M. Cook, Gaze and mutual gaze, 1976.

R. Bednarik, S. Eivazi, and H. Vrzakova, A Computational Approach for Prediction of Problem-Solving Behavior Using Support Vector Machines and Eye-Tracking Data, pp.111-134, 2013.

O. Biran and C. Cotton, Explanation and justification in machine learning: A survey, IJCAI-17 Workshop on Explainable AI (XAI), p.8, 2017.

J. Brewer, S. D'Angelo, and D. Gergle, Iris: Gaze visualization design made easy, Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. pp. D504:1-D504:4. CHI EA '18, 2018.

J. H. Brockmyer, C. M. Fox, K. A. Curtiss, E. McBroom, K. M. Burkhart et al., The development of the game engagement questionnaire: A measure of engagement in video game-playing, Journal of Experimental Social Psychology, vol.45, issue.4, pp.624-634, 2009.

R. Buettner, Cognitive workload of humans using artificial intelligence systems: Towards objective measurement applying eye-tracking technology, KI 2013: Advances in Artificial Intelligence, pp.37-48, 2013.

J. Chen and M. Barnes, Human-agent teaming for multirobot control: A review of human factors issues, IEEE Transactions on Human-Machine Systems, vol.44, issue.1, pp.13-29, 2014.

J. Y. Chen, S. G. Lakhmani, K. Stowers, A. R. Selkowitz, J. L. Wright et al., Situation awareness-based agent transparency and human-autonomy teaming effectiveness, Theoretical Issues in Ergonomics Science, vol.19, issue.3, pp.259-282, 2018.

S. D'Angelo and A. Begel, Improving communication between pair programmers using shared gaze awareness, Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. pp. 6245-6290. CHI '17, 2017.

S. D'Angelo, J. Brewer, and D. Gergle, Iris: A tool for designing contextually relevant gaze visualizations, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. pp. 79:1-79:5. ETRA '19, 2019.

S. D'Angelo and D. Gergle, Gazed and confused: Understanding and designing shared gaze for remote collaboration, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp.2492-2496, 2016.

S. D'Angelo and D. Gergle, An eye for design: Gaze visualizations for remote collaborative work, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. pp. 349:1-349:12. CHI '18, 2018.

J. Dodge, S. Penney, C. Hilderbrand, A. Anderson, and M. Burnett, How the experts do it: Assessing and explaining agent behaviors in real-time strategy games, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, vol.562, 2018.

A. T. Duchowski, Gaze-based interaction: A 30 year retrospective, Computers & Graphics, vol.73, pp.59-69, 2018.

A. T. Duchowski, K. Krejtz, I. Krejtz, C. Biele, A. Niedzielska et al., The index of pupillary activity: Measuring cognitive load vis-à-vis task difficulty with pupil oscillation, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. pp. 282:1-282:13. CHI '18, 2018.

M. Eiband, H. Schneider, M. Bilandzic, J. Fazekas-Con, M. Haug et al., Bringing transparency design into practice, 23rd International Conference on Intelligent User Interfaces, pp.211-223, 2018.

M. R. Endsley, Toward a Theory of Situation Awareness in Dynamic Systems, Human Factors: The Journal of the Human Factors and Ergonomics Society, vol.37, issue.1, pp.32-64, 1995.

M. Harbers, K. van den Bosch, and J. J. Meyer, A study into preferred explanations of virtual agent behavior, Intelligent Virtual Agents, pp.132-145, 2009.

S. G. Hart and L. E. Staveland, Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research, Advances in Psychology, vol.52, pp.139-183, 1988.

B. Hayes and J. A. Shah, Improving robot controller transparency through autonomous policy explanation, Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pp.303-312, 2017.

K. Higuch, R. Yonetani, and Y. Sato, Can eye help you?: Effects of visualizing eye fixations on remote collaboration scenarios for physical tasks, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. pp. 5180-5190. CHI '16, 2016.

R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman, Metrics for explainable ai: Challenges and prospects, 2018.

C. M. Huang, S. Andrist, A. Sauppé, and B. Mutlu, Using gaze patterns to predict task intent in collaboration, Frontiers in Psychology, vol.6, p.1049, 2015.

C. M. Huang and B. Mutlu, Anticipatory robot control for efficient human-robot collaboration, The Eleventh ACM/IEEE International Conference on Human Robot Interaction. pp. 83-90. HRI '16, 2016.

R. J. Jacob, What you look at is what you get: Eye movement-based interaction techniques, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp.11-18, 1990.

D. Kahneman, Thinking, Fast and Slow, 2011.

R. Kass and T. Finin, The need for user models in generating expert system explanations, International Journal of Expert Systems, vol.1, issue.4, 1988.

F. C. Keil, Explanation and understanding, Annual Review of Psychology, vol.57, issue.1, pp.227-254, 2006.

J. F. Kelley, An iterative design methodology for user-friendly natural language office information applications, ACM Trans. Inf. Syst, vol.2, issue.1, pp.26-41, 1984.

G. Klein, D. D. Woods, J. M. Bradshaw, R. R. Hoffman, and P. J. Feltovich, Ten challenges for making automation a "team player" in joint human-agent activity, IEEE Intelligent Systems, vol.19, issue.6, pp.91-95, 2004.

M. F. Land, Vision, eye movements, and natural behavior, Visual Neuroscience, vol.26, issue.1, pp.51-62, 2009.

P. Langley, B. Meadows, M. Sridharan, and D. Choi, Explainable agency for intelligent autonomous systems, Twenty-Ninth IAAI Conference, 2017.

M. Lankes, B. Maurer, and B. Stiglbauer, An eye for an eye: Gaze input in competitive online games and its effects on social presence, Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology. pp. 17:1-17:9. ACE2016, ACM, 2016.

N. Lesh, J. Marks, C. Rich, and C. L. Sidner, Man-computer symbiosis revisited: Achieving natural communication and collaboration with computers, IEICE Transactions, vol.87, issue.6, pp.1290-1298, 2004.

J. C. Licklider, Man-computer symbiosis, IRE Transactions on Human Factors in Electronics, vol.HFE-1, issue.1, pp.4-11, 1960.

B. Y. Lim and A. K. Dey, Investigating intelligibility for uncertain context-aware applications, Proceedings of the 13th International Conference on Ubiquitous Computing. pp. 415-424. UbiComp '11, 2011.

P. Madumal, Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. pp. 2432-2434. AAMAS '19, International Foundation for Autonomous Agents and Multiagent Systems, 2019.

P. Madumal, T. Miller, L. Sonenberg, and F. Vetere, Explainable reinforcement learning through a causal lens, 2019.

P. Madumal, T. Miller, L. Sonenberg, and F. Vetere, Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. pp. 1033-1041. AAMAS '19, International Foundation for Autonomous Agents and Multiagent Systems, 2019.

P. Majaranta and A. Bulling, Eye Tracking and Eye-Based Human-Computer Interaction, pp.39-65, 2014.

B. F. Malle and J. Knobe, Which behaviors do people explain? a basic actor-observer asymmetry, Journal of Personality and Social Psychology, vol.72, issue.2, p.288, 1997.

B. Maurer, M. Lankes, B. Stiglbauer, and M. Tscheligi, EyeCo: Effects of Shared Gaze on Social Presence in an Online Cooperative Game, pp.102-114, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01640281

T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, vol.267, pp.1-38, 2019.

J. D. Moore and C. L. Paris, Requirements for an expert system explanation facility, Computational Intelligence, vol.7, issue.4, pp.367-370, 1991.

V. Narayanan, Y. Zhang, N. Mendoza, and S. Kambhampati, Automated planning for peer-to-peer teaming and its evaluation in remote human-robot interaction, Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, pp.161-162, 2015.

J. Newn, F. Allison, E. Velloso, and F. Vetere, Looks can be deceiving: Using gaze visualisation to predict and mislead opponents in strategic gameplay, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018.

J. Newn, E. Velloso, F. Allison, Y. Abdelrahman, and F. Vetere, Evaluating real-time gaze representations to infer intentions in competitive turn-based strategy games, Proceedings of the Annual Symposium on Computer-Human Interaction in Play, pp.541-552, 2017.

I. Nunes and D. Jannach, A systematic review and taxonomy of explanations in decision support and recommender systems, User Modeling and User-Adapted Interaction, vol.27, issue.3-5, pp.393-444, 2017.

P. Qvarfordt and S. Zhai, Conversing with the user based on eye-gaze patterns, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 221-230. CHI '05, 2005.

L. D. Riek, Wizard of Oz studies in HRI: A systematic review and new reporting guidelines, J. Hum.-Robot Interact, vol.1, issue.1, pp.119-136, 2012.

V. Rieser, Bootstrapping reinforcement learning-based dialogue strategies from Wizard-of-Oz data, 2008.

R. Singh, T. Miller, J. Newn, L. Sonenberg, E. Velloso et al., Combining planning with gaze for online human intention recognition, Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems. AAMAS '18, International Foundation for Autonomous Agents and Multiagent Systems, 2018.

R. Stein and S. E. Brennan, Another person's eye gaze as a cue in solving programming problems, Proceedings of the 6th International Conference on Multimodal Interfaces, pp.9-15, 2004.

V. V. Unhelkar, P. A. Lasota, Q. Tyroller, R. Buhai, L. Marceau et al., Human-aware robotic assistant for collaborative assembly: Integrating human motion prediction with planning in time, IEEE Robotics and Automation Letters, vol.3, issue.3, pp.2394-2401, 2018.

E. Velloso and M. Carter, The emergence of eyeplay: A survey of eye interaction in games, Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. pp. 171-185. CHI PLAY '16, 2016.

R. Vertegaal, The gaze groupware system: Mediating joint attention in multiparty communication and collaboration, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp.294-301, 1999.

D. Wang, Q. Yang, A. Abdul, and B. Y. Lim, Designing theory-driven user-centric explainable ai, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. pp. 601:1-601:15. CHI '19, 2019.

N. Wang, D. V. Pynadath, and S. G. Hill, Trust calibration within a human-robot team: Comparing automatically generated explanations, The Eleventh ACM/IEEE International Conference on Human Robot Interaction, pp.109-116, 2016.

Y. Zhang, K. Pfeuffer, M. K. Chong, J. Alexander, A. Bulling et al., Look together: using gaze for assisting co-located collaborative search, Personal and Ubiquitous Computing, vol.21, issue.1, pp.173-186, 2017.