
Detecting Adversarial Attacks in the Context of Bayesian Networks

Abstract: In this research, we study data poisoning attacks against Bayesian network structure learning algorithms. We propose using the distance between Bayesian network models, together with a data-conflict value, to detect data poisoning attacks. We propose a two-layered framework that detects both one-step and long-duration data poisoning attacks. Layer 1 enforces “reject on negative impacts” detection; i.e., input that changes the Bayesian network model is labeled potentially malicious. Layer 2 aims to detect long-duration attacks; i.e., observations in the incoming data that conflict with the original Bayesian model. We show that, for a typical small Bayesian network, only a few contaminated cases are needed to corrupt the learned structure. Our detection methods are effective not only against one-step attacks but also against sophisticated long-duration attacks, and we present empirical results supporting these claims.
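The Layer 1 idea above ("reject on negative impacts") can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the model distance is a simple structural Hamming distance (SHD) over directed edge sets, whereas the paper's actual distance measure between Bayesian network models may differ, and the structures and threshold below are hypothetical.

```python
def shd(edges_a, edges_b):
    """Structural Hamming distance between two DAGs given as directed edge sets.

    This simple set-based variant counts an edge reversal as 2
    (one missing edge plus one extra edge).
    """
    a, b = set(edges_a), set(edges_b)
    return len(a ^ b)  # edges present in one structure but not the other


def layer1_reject(original_edges, relearned_edges, threshold=0):
    """Label incoming data potentially malicious if relearning on it
    changed the network structure by more than the threshold."""
    return shd(original_edges, relearned_edges) > threshold


# Hypothetical 4-node example: the incoming batch reversed the edge B -> C.
baseline = [("A", "B"), ("B", "C"), ("C", "D")]
relearned = [("A", "B"), ("C", "B"), ("C", "D")]

print(shd(baseline, relearned))            # 2 (reversal = delete + add)
print(layer1_reject(baseline, relearned))  # True: batch flagged
```

With threshold 0, any structural change at all triggers rejection, matching the abstract's statement that input which changes the model is labeled potentially malicious; a larger threshold would tolerate small fluctuations in the learned structure.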
Document type :
Conference papers

Contributor: Hal Ifip
Submitted on: Thursday, November 28, 2019 - 2:25:14 PM
Last modification on: Wednesday, November 3, 2021 - 6:03:34 AM
Long-term archiving on: Saturday, February 29, 2020 - 4:20:14 PM


Files produced by the author(s)


Distributed under a Creative Commons Attribution 4.0 International License



Emad Alsuwat, Hatim Alsuwat, John Rose, Marco Valtorta, Csilla Farkas. Detecting Adversarial Attacks in the Context of Bayesian Networks. 33rd IFIP Annual Conference on Data and Applications Security and Privacy (DBSec), Jul 2019, Charleston, SC, United States. pp.3-22, ⟨10.1007/978-3-030-22479-0_1⟩. ⟨hal-02384585⟩


