A. Adadi and M. Berrada, Peeking inside the black-box: A survey on explainable artificial intelligence (XAI), IEEE Access, vol.6, pp.52138-52160, 2018.

D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen et al., How to explain individual classification decisions, Journal of Machine Learning Research, vol.11, pp.1803-1831, 2010.

J. A. Buolamwini, Gender shades: intersectional phenotypic and demographic evaluation of face datasets and gender classifiers, Master's thesis, Massachusetts Institute of Technology, 2017.

M. Craven and J. W. Shavlik, Extracting tree-structured representations of trained networks, Advances in Neural Information Processing Systems, pp.24-30, 1996.

A. Datta, S. Sen, and Y. Zick, Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems, 2016.

A. Dhurandhar, V. Iyengar, R. Luss, and K. Shanmugam, A formal framework to characterize interpretability of procedures, 2017.

J. H. Friedman, Greedy function approximation: A gradient boosting machine, The Annals of Statistics, vol.29, issue.5, pp.1189-1232, 2001.

A. Goldstein, A. Kapelner, J. Bleich, and E. Pitkin, Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation, 2013.

R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti et al., A survey of methods for explaining black box models, ACM Computing Surveys (CSUR), vol.51, issue.5, p.93, 2018.

W. Guo, D. Mu, J. Xu, P. Su, G. Wang et al., LEMNA: Explaining Deep Learning based Security Applications, Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security -CCS '18, pp.364-379, 2018.

A. Henelius, K. Puolamäki, H. Boström, L. Asker, and P. Papapetrou, A peek into the black box: exploring classifiers by randomization, Data Mining and Knowledge Discovery, vol.28, issue.5-6, pp.1503-1529, 2014.

C. Henin and D. Le Métayer, Towards a generic framework for black-box explanations of algorithmic decision systems (Extended Version), Inria Research Report, no.9276, 2019.

G. Hooker, Discovering additive structure in black box functions, Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '04, p.575, 2004.

J. Krause, A. Perer, and K. Ng, Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems -CHI '16, pp.5686-5697, 2016.

H. Lakkaraju, E. Kamar, R. Caruana, and J. Leskovec, Interpretable & Explorable Approximations of Black Box Models, 2017.

Z. C. Lipton, The mythos of model interpretability, 2016.

S. Lundberg and S. Lee, A Unified Approach to Interpreting Model Predictions, 2017.

P. Madumal, T. Miller, F. Vetere, and L. Sonenberg, Towards a grounded dialog model for explainable artificial intelligence, 2018.

T. Miller, P. Howe, and L. Sonenberg, Explainable AI: beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences, 2017.

B. D. Mittelstadt, C. Russell, and S. Wachter, Explaining explanations in AI, CoRR, 2018.

G. Ras, M. van Gerven, and P. Haselager, Explanation methods in deep learning: Users, values, concerns and challenges, 2018.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16, pp.1135-1144, 2016.

M. T. Ribeiro, S. Singh, and C. Guestrin, Anchors: High-precision model-agnostic explanations, AAAI Conference on Artificial Intelligence, 2018.

A. Richardson and A. Rosenfeld, A survey of interpretability and explainability in human-agent systems, vol.7

M. Robnik-Šikonja and I. Kononenko, Explaining classifications for individual instances, IEEE Transactions on Knowledge and Data Engineering, vol.20, issue.5, pp.589-600, 2008.

R. Tomsett, D. Braines, D. Harborne, A. D. Preece, and S. Chakraborty, Interpretable to whom? a role-based model for analyzing interpretable machine learning systems, CoRR, 2018.

E. Ventocilla, T. Helldin, M. Riveiro, J. Bae, and N. Lavesson, Towards a taxonomy for interpretable and interactive machine learning, 2018.

S. Wachter, B. Mittelstadt, and C. Russell, Counterfactual explanations without opening the black box: Automated decisions and the GDPR, SSRN Electronic Journal, 2017.

A. Weller, Challenges for transparency, 2017.

E. Štrumbelj and I. Kononenko, Explaining prediction models and individual predictions with feature contributions, Knowledge and Information Systems, vol.41, issue.3, pp.647-665, 2014.