S. Alfeld, X. Zhu, and P. Barford, Data poisoning attacks against autoregressive models, Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), pp.1452-1458, 2016.

E. Alsuwat, H. Alsuwat, J. Rose, M. Valtorta, and C. Farkas, Long duration data poisoning attacks on Bayesian networks, 2019.

E. Alsuwat, H. Alsuwat, M. Valtorta, and C. Farkas, Cyber attacks against the pc learning algorithm, ECML PKDD 2018 Workshops, pp.159-176, 2019.

E. Alsuwat, M. Valtorta, and C. Farkas, Bayesian structure learning attacks, 2018.

E. Alsuwat, M. Valtorta, and C. Farkas, How to generate the network you want with the PC learning algorithm, Proceedings of the 11th Workshop on Uncertainty Processing (WUPES'18), pp.1-12, 2018.

M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar, The security of machine learning, Machine Learning, vol.81, issue.2, pp.121-148, 2010.

M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, Can machine learning be secure?, Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp.16-25, 2006.

B. Biggio, S. R. Bulò, I. Pillai, M. Mura, E. Z. Mequanint et al., Poisoning complete-linkage hierarchical clustering, Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pp.42-52, 2014.

B. Biggio, L. Didaci, G. Fumera, and F. Roli, Poisoning attacks to compromise face templates, 2013 International Conference on Biometrics (ICB), pp.1-7, 2013.

B. Biggio, G. Fumera, F. Roli, and L. Didaci, Poisoning adaptive biometric systems, Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pp.417-425, 2012.

B. Biggio, B. Nelson, and P. Laskov, Poisoning attacks against support vector machines, Proceedings of the 29th International Conference on Machine Learning, pp.1467-1474, 2012.

B. Biggio, I. Pillai, S. Rota Bulò, D. Ariu, M. Pelillo et al., Is data clustering in adversarial settings secure?, Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security, pp.87-98, 2013.

N. Carlini and D. Wagner, Adversarial examples are not easily detected: Bypassing ten detection methods, Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp.3-14, 2017.

P. P. Chan, Z. M. He, H. Li, and C. C. Hsu, Data sanitization against adversarial label contamination based on data complexity, International Journal of Machine Learning and Cybernetics, vol.9, issue.6, pp.1039-1052, 2018.

R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner, Detecting adversarial samples from artifacts, 2017.

J. Gardiner and S. Nagaraja, On the security of machine learning in malware C&C detection: A survey, ACM Computing Surveys (CSUR), vol.49, issue.3, p.59, 2016.

I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, 2014.

L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar, Adversarial machine learning, Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pp.43-58, 2011.

M. de Jongh and M. J. Druzdzel, A comparison of structural distance measures for causal Bayesian network models, Recent Advances in Intelligent Information Systems, Challenging Problems of Science, Computer Science series, pp.443-456, 2009.

A. Kantchelian, J. Tygar, and A. Joseph, Evasion and hardening of tree ensemble classifiers, International Conference on Machine Learning, pp.2387-2396, 2016.

P. W. Koh and P. Liang, Understanding black-box predictions via influence functions, International Conference on Machine Learning, pp.1885-1894, 2017.

N. Šrndić and P. Laskov, Practical evasion of a learning-based classifier: A case study, 2014 IEEE Symposium on Security and Privacy (SP), pp.197-211, 2014.

S. L. Lauritzen and D. J. Spiegelhalter, Local computations with probabilities on graphical structures and their application to expert systems, Journal of the Royal Statistical Society. Series B (Methodological), pp.157-224, 1988.

Q. Liu, P. Li, W. Zhao, W. Cai, S. Yu et al., A survey on security threats and defensive techniques of machine learning: a data driven view, IEEE Access, vol.6, pp.12103-12117, 2018.

J. Lu, T. Issaranon, and D. Forsyth, SafetyNet: Detecting and rejecting adversarial examples robustly, 2017 IEEE International Conference on Computer Vision (ICCV), pp.446-454, 2017.

A. L. Madsen, F. Jensen, U. B. Kjærulff, and M. Lang, The Hugin tool for probabilistic graphical models, International Journal on Artificial Intelligence Tools, vol.14, issue.3, pp.507-543, 2005.

S. Mei and X. Zhu, The security of latent Dirichlet allocation, Artificial Intelligence and Statistics, pp.681-689, 2015.

S. Mei and X. Zhu, Using machine teaching to identify optimal training-set attacks on machine learners, Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-15), pp.2871-2877, 2015.

L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee et al., Towards poisoning of deep learning algorithms with back-gradient optimization, Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp.27-38, 2017.

B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. Rubinstein et al., Misleading learners: Co-opting your spam filter, Machine Learning in Cyber Trust, pp.17-51, 2009.

T. D. Nielsen and F. V. Jensen, Bayesian networks and decision graphs, 2009.

K. G. Olesen, S. L. Lauritzen, and F. V. Jensen, aHUGIN: A system creating adaptive causal probabilistic networks, Uncertainty in Artificial Intelligence, pp.223-229, 1992.

A. Paudice, L. Muñoz-González, A. György, and E. C. Lupu, Detection of adversarial training examples in poisoning attacks through anomaly detection, 2018.

P. Spirtes, C. N. Glymour, and R. Scheines, Causation, prediction, and search, 2000.

Y. Wang and K. Chaudhuri, Data poisoning attacks against online learning, 2018.

C. Yang, Q. Wu, H. Li, and Y. Chen, Generative poisoning attack method against neural networks, 2017.

S. K. Yi, M. Steyvers, M. D. Lee, and M. J. Dry, The wisdom of the crowd in combinatorial problems, Cognitive Science, vol.36, issue.3, pp.452-470, 2012.