R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti et al., A survey of methods for explaining black box models, ACM Computing Surveys (CSUR), vol.51, issue.5, p.93, 2018.

T. Miller, P. Howe, and L. Sonenberg, Explainable AI: beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences

P. Madumal, T. Miller, F. Vetere, and L. Sonenberg,

B. D. Mittelstadt, C. Russell, and S. Wachter, Explaining explanations in AI

A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli, Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, pp.1-18, 2018.

S. T. Mueller, R. R. Hoffman, W. Clancey, and A. Emrey, Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI

V. Arya, R. K. Bellamy, P. Chen, A. Dhurandhar, M. Hind et al., One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques

C. Henin and D. L. Métayer, Towards a generic framework for black-box explanations of algorithmic decision systems (Extended Version), Inria Research Report, n° 9276

R. Tomsett, D. Braines, D. Harborne, A. D. Preece, and S. Chakraborty, Interpretable to whom? a role-based model for analyzing interpretable machine learning systems

A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik et al., Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
URL : https://hal.archives-ouvertes.fr/hal-02381211

The icon of Fig. 2 was made by turkkub from www.flaticon.com

T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, vol.267

A. Weller, Challenges for transparency

Z. C. Lipton, The mythos of model interpretability

S. Wachter, B. Mittelstadt, and C. Russell, Counterfactual explanations without opening the black box: Automated decisions and the GDPR, vol.31, pp.841-887, 2018.

S. Stumpf, V. Rajaram, L. Li, M. Burnett, T. Dietterich et al., Toward harnessing user feedback for machine learning

B. Y. Lim, A. K. Dey, and D. Avrahami, Why and why not explanations improve the intelligibility of context-aware intelligent systems, Proceedings of the 27th International Conference on Human Factors in Computing Systems - CHI '09, p.2119, 2009.

T. Miller, P. Howe, and L. Sonenberg, Explainable AI: Beware of inmates running the asylum, IJCAI-17 Workshop on Explainable AI (XAI), 2017.

H. Lakkaraju, E. Kamar, R. Caruana, and J. Leskovec, Interpretable & Explorable Approximations of Black Box Models

M. T. Ribeiro, S. Singh, and C. Guestrin, Anchors: High-precision model-agnostic explanations, AAAI Conference on Artificial Intelligence, 2018.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16, pp.1135-1144, 2016.

E. Štrumbelj and I. Kononenko, Explaining prediction models and individual predictions with feature contributions, Knowledge and Information Systems, vol.41, issue.3, pp.647-665, 2014.

T. Laugel, M. Lesot, C. Marsala, X. Renard, and M. Detyniecki, The dangers of post-hoc interpretability: Unjustified counterfactual explanations
URL : https://hal.archives-ouvertes.fr/hal-02275308

T. Miller, Explanation in artificial intelligence: insights from the social sciences

F. Doshi-velez and B. Kim, Towards A Rigorous Science of Interpretable Machine Learning, 2017.

G. Ras, M. van Gerven, and P. Haselager, Explanation methods in deep learning: Users, values, concerns and challenges

A. Adadi and M. Berrada, Peeking inside the black-box: A survey on explainable artificial intelligence (XAI), IEEE Access, vol.6, pp.52138-52160, 2018.

C. T. Wolf, Explainability scenarios: towards scenario-based xai design, Proceedings of the 24th International Conference on Intelligent User Interfaces -IUI '19, pp.252-257, 2019.

F. Poursabzi-sangdeh, D. G. Goldstein, J. M. Hofman, J. W. Vaughan, and H. Wallach, Manipulating and measuring model interpretability

M. Hall, D. Harborne, R. Tomsett, V. Galetic, S. Quintana-Amate et al., A systematic method to understand requirements for explainable AI (XAI) systems

B. Kim, R. Khanna, and O. O. Koyejo, Examples are not enough, learn to criticize! criticism for interpretability, Advances in Neural Information Processing Systems, pp.2280-2288, 2016.

K. Sokol and P. Flach, One explanation does not fit all: The promise of interactive explanations for machine learning transparency

S. M. Lundberg and S. Lee, A Unified Approach to Interpreting Model Predictions, pp.4765-4774, 2017.

J. A. Fails and D. R. Olsen,

D. Walton, A dialogue system specification for explanation, Synthese, vol.182, issue.3, pp.349-374, 2011.

P. Madumal, T. Miller, L. Sonenberg, and F. Vetere, A grounded interaction protocol for explainable artificial intelligence

H. Nori, S. Jenkins, P. Koch, and R. Caruana, InterpretML: A unified framework for machine learning interpretability

J. Klaise, A. Van Looveren, G. Vacanti, and A. Coca, Alibi: Algorithms for monitoring and explaining machine learning models

P. Biecek, DALEX: Explainers for complex predictive models in R, Journal of Machine Learning Research, vol.19, issue.84, pp.1-5, 2018.

A. Dhurandhar, V. Iyengar, R. Luss, and K. Shanmugam, A formal framework to characterize interpretability of procedures

R. Guidotti, A. Monreale, S. Ruggieri, D. Pedreschi, F. Turini et al., Local rule-based explanations of black box decision systems