Z. C. Lipton, The Mythos of Model Interpretability, Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016.

T. Seiller, Why Complexity Theorists Should Care About Philosophy, ANR-DFG "Beyond Logic" Conference, Cerisy-la-Salle, 2017.

C. Caldini, Google est-il antisémite ?

B. Abdollahi and O. Nasraoui, Explainable Restricted Boltzmann Machines for Collaborative Filtering, Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016.

Parlement Européen, Règlement Général sur la Protection des Données (RGPD), 2016.

Assemblée Nationale et Sénat, Loi n° 2016-1321 pour une République numérique, Journal Officiel de la République Française, 2016.

W. Knight, The Dark Secret at the Heart of AI, MIT Technology Review, vol. 120, issue 3, 2017.

B. Goodman and S. Flaxman, EU Regulations on Algorithmic Decision-Making and a "Right to Explanation", Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016.
DOI : 10.1609/aimag.v38i3.2741

URL : http://arxiv.org/pdf/1606.08813

N. Condry, Meaningful Models: Utilizing Conceptual Structure to Improve Machine Learning Interpretability, Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016.

S. Hara and K. Hayashi, Making Tree Ensembles Interpretable, Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016.

J. Krause, A. Perer, and E. Bertini, Using Visual Analytics to Interpret Predictive Machine Learning Models, Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016.

M. Egele, T. Scholte, E. Kirda, and C. Kruegel, A Survey on Automated Dynamic Malware-Analysis Techniques and Tools, ACM Computing Surveys, vol. 44, issue 2, 2012.
DOI : 10.1145/2089125.2089126

URL : http://www.iseclab.org/papers/malware_survey.pdf

A. Dhurandhar, V. Iyengar, R. Luss, et al., A Formal Framework to Characterize Interpretability of Procedures, Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning, 2017.

A. Weller, Challenges for Transparency, Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning, 2017.

F. Doshi-Velez and B. Kim, Towards a Rigorous Science of Interpretable Machine Learning, ArXiv e-prints, 2017.

M. T. Ribeiro, S. Singh, and C. Guestrin, Introduction to Local Interpretable Model-Agnostic Explanations (LIME), 2016.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016.

L. Wehenkel, On uncertainty measures used for decision tree induction, IPMU-96, Information Processing and Management of Uncertainty in Knowledge-Based Systems, p.6, 1996.

R. Senge, S. Bösner, K. Dembczynski, J. Haasenritter, O. Hirsch et al., Reliable Classification: Learning Classifiers that Distinguish Aleatoric and Epistemic Uncertainty, Information Sciences, pp. 16-29, 2014.
DOI : 10.1016/j.ins.2013.07.030

URL : http://www.mathematik.uni-marburg.de/~eyke/publications/reliable-classification.pdf

H. Wang and D. Yeung, Towards Bayesian Deep Learning: A Survey, ArXiv e-prints, 2016.

A. Kendall and Y. Gal, What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, ArXiv e-prints, 2017.

Y. Gal and Z. Ghahramani, Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ArXiv e-prints, 2015.

J. Vincent, Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech, 2018.