D. Becker, Machine learning explainability, 2019.

B. Braunschweig, Intelligence artificielle, les défis actuels et l'action d'Inria, 2016.

A. Burt, How the deployment of an explainable AI solution improves energy performance management at Dalkia, 2017.

A. Burt, How Total Direct Energie applies explainable AI to its virtual assistant, 2019.

C. Denis and F. Varenne, Interpretability and explicability for machine learning: between descriptive models, predictive models and causal models. A necessary epistemological clarification, National (French) Conference on Artificial Intelligence (CNIA) – Artificial Intelligence Platform (PFIA), pp. 60-68, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02184519

J. H. Friedman and B. E. Popescu, Predictive learning via rule ensembles, The Annals of Applied Statistics, vol. 2, no. 3, pp. 916-954, 2008.

A. Guggiola, J. Schertzer, A. Hoff, C. Ledoux, S. Monnier et al., IA, explique-toi ! Technical report, 2018.

D. Gunning, The three stages of XAI, DARPA, 2017.

J. P. Holdren and M. Smith, Preparing for the Future of Artificial Intelligence, Executive Office of the President, National Science and Technology Council Committee on Technology, 2016.

IBM, AI Explainability 360, 2019.

A. Karpathy, ConvNetJS: Deep learning in your browser, 2014.

S. M. Lundberg and S.-I. Lee, A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems, pp. 4765-4774, 2017.

C. Mars, Explainable AI, a game changer for AI in production – AI Night, 2019.

T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, vol.267, pp.1-38, 2019.

J. R. Quinlan, C4.5: Programs for Machine Learning, 1993.

M. T. Ribeiro, S. Singh, and C. Guestrin, Model-agnostic interpretability of machine learning, 2016.

M. T. Ribeiro, S. Singh, and C. Guestrin, Why should I trust you?: Explaining the predictions of any classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.

A. Saabas, Interpreting random forests, 2014.

K. Simonyan, A. Vedaldi, and A. Zisserman, Deep inside convolutional networks: Visualising image classification models and saliency maps, 2013.

D. Smilkov and S. Carter.

H. Strobelt, S. Gehrmann, M. Behrisch, A. Perer, H. Pfister et al., Debugging sequence-to-sequence models with Seq2Seq-Vis, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 368-370, 2018.

C. Villani, Y. Bonnet, C. Berthet, F. Levin, M. Schoenauer et al., Donner un sens à l'intelligence artificielle: pour une stratégie nationale et européenne, 2018.

A. Weller, Challenges for transparency, 2017.

L. Abassi and I. Boukhris, A worker clustering-based approach of label aggregation under the belief function theory, Applied Intelligence, pp. 1-10, 2018.

A. Ben Rjab, M. Kharoune, Z. Miklos, and A. Martin, Characterization of experts in crowdsourcing platforms, Belief Functions: Theory and Applications, LNCS vol. 9861, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01372142

A. P. Dempster, Upper and lower probabilities induced by a multivalued mapping, The Annals of Mathematical Statistics, vol. 38, pp. 325-339, 1967.

D. Koulougli, A. Hadjali, and E. I. Rassoul, Handling query answering in crowdsourcing systems: A belief function-based approach, 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), pp. 1-6, 2016.

H. Ouni, A. Martin, L. Gros, M. Kharoune, and Z. Miklos, Une mesure d'expertise pour le crowdsourcing, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01432561

. Rnti--x,