
Detecting Adversarial Attacks in the Context of Bayesian Networks

Abstract: In this research, we study data poisoning attacks against Bayesian network structure learning algorithms. We propose to use the distance between Bayesian network models and the value of data conflict to detect data poisoning attacks. We propose a two-layered framework that detects both one-step and long-duration data poisoning attacks. Layer 1 enforces “reject on negative impacts” detection; i.e., input that changes the Bayesian network model is labeled potentially malicious. Layer 2 aims to detect long-duration attacks; i.e., observations in the incoming data that conflict with the original Bayesian model. We show that for a typical small Bayesian network, only a few contaminated cases are needed to corrupt the learned structure. Our detection methods are effective against not only one-step attacks but also sophisticated long-duration attacks. We also present our empirical results.
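The Layer-1 idea described in the abstract, flagging an incoming batch as potentially malicious when it changes the learned network structure, can be sketched as follows. This is a minimal illustration, not the paper's method: the toy structure learner, the function names (`layer1_detect`, `structural_distance`), and the symmetric-difference edge distance are all hypothetical stand-ins for the actual structure learning algorithms and model distance used in the paper.

```python
def structural_distance(edges_a, edges_b):
    """Count edges present in one learned structure but not the other
    (a simple symmetric-difference distance between edge sets)."""
    return len(set(edges_a) ^ set(edges_b))


def layer1_detect(learn_structure, baseline_data, incoming_batch, threshold=0):
    """'Reject on negative impacts': re-learn the structure on
    baseline + batch and flag the batch if the model moved more
    than `threshold` edges away from the original."""
    original = learn_structure(baseline_data)
    updated = learn_structure(baseline_data + incoming_batch)
    return structural_distance(original, updated) > threshold


def toy_learner(data):
    """Toy stand-in for a structure learner over records (a, b):
    keep the edge A -> B only while a majority of records agree (a == b)."""
    agree = sum(1 for a, b in data if a == b)
    return [("A", "B")] if agree * 2 > len(data) else []
```

With a baseline of mostly agreeing records, a benign batch leaves the learned edge set unchanged and passes, while a small poisoned batch of disagreeing records flips the edge decision and is flagged, mirroring the paper's observation that only a few contaminated cases can corrupt a small network's learned structure.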
Document type :
Conference papers

Cited literature: 37 references

https://hal.inria.fr/hal-02384585
Contributor: Hal Ifip
Submitted on: Thursday, November 28, 2019 - 2:25:14 PM
Last modification on: Thursday, November 28, 2019 - 2:29:15 PM
Long-term archiving on: Saturday, February 29, 2020 - 4:20:14 PM

File

Restricted access: to satisfy the distribution rights of the publisher, the document is embargoed until 2022-01-01.


Licence


Distributed under a Creative Commons Attribution 4.0 International License


Citation

Emad Alsuwat, Hatim Alsuwat, John Rose, Marco Valtorta, Csilla Farkas. Detecting Adversarial Attacks in the Context of Bayesian Networks. 33rd IFIP Annual Conference on Data and Applications Security and Privacy (DBSec), Jul 2019, Charleston, SC, United States. pp.3-22, ⟨10.1007/978-3-030-22479-0_1⟩. ⟨hal-02384585⟩
