S. Amershi, M. Chickering, S. M. Drucker, B. Lee, P. Simard et al., ModelTracker: Redesigning Performance Analysis Tools for Machine Learning, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp.337-346, 2015.

M. Ankerst, C. Elsen, M. Ester, and H. P. Kriegel, Visual classification: an interactive approach to decision tree construction, Proceedings of KDD '99, pp.392-396, 1999.

A. Bechara, H. Damasio, A. R. Damasio, and G. P. Lee, Different contributions of the human amygdala and ventromedial prefrontal cortex to decision-making, Journal of Neuroscience, vol.19, pp.5473-5481, 1999.

B. Becker, R. Kohavi, and D. Sommerfield, Visualizing the simple Bayesian classifier, Information Visualization in Data Mining and Knowledge Discovery, pp.237-249, 2002.

O. Biran and C. Cotton, Explanation and justification in machine learning: A survey, Proceedings of the 2017 IJCAI Explainable AI Workshop, pp.8-13, 2017.

M. Brahimi, M. Arsenovic, S. Laraba, S. Sladojevic, K. Boukhalfa et al., Deep Learning for Plant Diseases: Detection and Saliency Map Visualisation, pp.93-117, 2018.

P. B. Brandtzaeg and A. Følstad, Trust and distrust in online fact-checking services, Communications of the ACM, vol.60, issue.9, pp.65-71, 2017.

A. Calero Valdez, M. Ziefle, K. Verbert, A. Felfernig, and A. Holzinger, Recommender systems for health informatics: State-of-the-art and future perspectives, Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, pp.391-414, 2016.

D. Caragea, D. Cook, and V. G. Honavar, Gaining insights into support vector machine pattern classifiers using projection-based tour methods, Proceedings of KDD '01, pp.251-256, 2001.

D. Chen, R. K. Bellamy, P. K. Malkin, and T. Erickson, Diagnostic visualization for non-expert machine learning practitioners: A design study, 2016 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp.87-95, 2016.

B. Figner and R. O. Murphy, Using skin conductance in judgment and decision making research, A Handbook of Process Tracing Methods for Decision Research: A Critical Review and User's Guide, pp.163-184, 2010.

D. Fisher, R. DeLine, M. Czerwinski, and S. Drucker, Interactions with big data analytics, Interactions, vol.19, issue.3, pp.50-59, 2012.

Z. Guo, M. O. Ward, and E. A. Rundensteiner, Nugget Browser: Visual Subgroup Mining and Statistical Significance Discovery in Multivariate Datasets, Proceedings of the 15th International Conference on Information Visualisation, pp.267-275, 2011.

P. Hartono, A transparent cancer classifier, Health Informatics Journal, 2018.

A. Ilyas, L. Engstrom, A. Athalye, and J. Lin, Black-box adversarial attacks with limited queries and information, Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol.80, pp.10-15, 2018.

J. Zhou, S. Z. Arshad, K. Luo, S. Yu et al., Indexing cognitive load using blood volume pulse features, Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '17), 2017.

R. F. Kizilcec, How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp.2390-2395, 2016.

P. W. Koh and P. Liang, Understanding black-box predictions via influence functions, Proceedings of the 34th International Conference on Machine Learning, pp.6-11, 2017.

T. Kriplean, C. Bonnar, A. Borning, B. Kinney, and B. Gill, Integrating on-demand fact-checking with public dialogue, Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, p.14, 2014.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, vol.25, pp.1097-1105, 2012.

W. Landecker, M. D. Thomure, L. M. Bettencourt, M. Mitchell, G. T. Kenyon et al., Interpreting individual classifications of hierarchical networks, 2013 IEEE Symposium on Computational Intelligence and Data Mining (CIDM), pp.32-38, 2013.

J. D. Lee and K. A. See, Trust in automation: Designing for appropriate reliance, Human Factors, vol.46, issue.1, pp.50-80, 2004.

Z. Li, B. Zhang, Y. Wang, F. Chen, R. Taib et al., Water Pipe Condition Assessment: A Hierarchical Beta Process Approach for Sparse Incident Data, Machine Learning, vol.95, issue.1, pp.11-26, 2014.

Z. C. Lipton, The mythos of model interpretability, Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), 2016.

S. Luo, J. Zhou, H. B. Duh, and F. Chen, BVP feature signal analysis for intelligent user interface, Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, p.17, 2017.

S. Mannarswamy and S. Roy, Evolving AI from research to real life: Some challenges and suggestions, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pp.5172-5179, 2018.

M. Nilsson and P. Funk, A case-based classification of respiratory sinus arrhythmia, Advances in Case-Based Reasoning, Lecture Notes in Computer Science, pp.673-685, 2004.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp.1135-1144, 2016.

A. Richardson and A. Rosenfeld, A survey of interpretability and explainability in human-agent systems, Proceedings of IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), pp.137-143, 2018.

M. Robnik-Šikonja, I. Kononenko, and E. Štrumbelj, Quality of Classification Explanations with PRBF, Neurocomputing, vol.96, pp.37-46, 2012.

L. R. Ye and P. E. Johnson, The impact of explanation facilities on user acceptance of expert systems advice, MIS Quarterly, vol.19, issue.2, pp.157-172, 1995.

M. Yin, J. W. Vaughan, and H. Wallach, Does stated accuracy affect trust in machine learning algorithms?, Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning, 2018.

J. Zhai, A. Barreto, C. Chin, and C. Li, Realization of stress detection using psychophysiological signals for improvement of human-computer interactions, Proceedings of IEEE SoutheastCon, pp.415-420, 2005.

J. Zhou, S. Z. Arshad, X. Wang, Z. Li, D. Feng et al., End-user development for interactive data analytics: Uncertainty, correlation and user confidence, IEEE Transactions on Affective Computing, vol.9, issue.3, pp.383-395, 2018.

J. Zhou, C. Bridon, F. Chen, A. Khawaji, and Y. Wang, Be Informed and Be Involved: Effects of Uncertainty and Correlation on User Confidence in Decision Making, Proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI2015) Works-in-Progress, 2015.

J. Zhou and F. Chen (Eds.), Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent, Springer, 2018.

J. Zhou, M. A. Khawaja, Z. Li, J. Sun, Y. Wang et al., Making Machine Learning Useable by Revealing Internal States Update -A Transparent Approach, International Journal of Computational Science and Engineering, vol.13, issue.4, pp.378-389, 2016.

J. Zhou, Z. Li, W. Zhi, B. Liang, D. Moses et al., Using convolutional neural networks and transfer learning for bone age classification, 2017 International Conference on Digital Image Computing: Techniques and Applications, pp.1-6, 2017.

J. Zhou, J. Sun, F. Chen, Y. Wang, R. Taib et al., Measurable Decision Making with GSR and Pupillary Analysis for Intelligent User Interface, ACM Transactions on Computer-Human Interaction, vol.21, issue.6, p.33, 2015.

J. Zhou, J. Sun, Y. Wang, and F. Chen, Wrapping practical problems into a machine learning framework: Using water pipe failure prediction as a case study, International Journal of Intelligent Systems Technologies and Applications, vol.16, issue.3, pp.191-207, 2017.