Backdoor Attacks in Neural Networks – A Systematic Evaluation on Multiple Traffic Sign Datasets

Abstract: Machine learning, and deep learning in particular, has seen tremendous advances and surpassed human-level performance on a number of tasks. Machine learning is increasingly integrated into many applications, thereby becoming part of everyday life and automating decisions based on predictions. In certain domains, such as medical diagnosis, security, autonomous driving, and financial trading, wrong predictions can significantly affect individuals and groups. While advances in prediction accuracy have been impressive, machine learning systems can still make unexpected mistakes on relatively easy examples, and the robustness of algorithms has become a concern before such systems are deployed in real-world applications. Recent research has shown that deep neural networks in particular are susceptible to adversarial attacks that can trigger such wrong predictions. For image analysis tasks, these attacks take the form of small perturbations that remain (almost) imperceptible to human vision, yet can cause a neural network classifier to completely change its prediction about an image, with the model even reporting high confidence in the wrong prediction. Of particular interest to an attacker are so-called backdoor attacks, in which a specific key is embedded into a data sample to trigger a pre-defined class prediction. In this paper, we systematically evaluate the effectiveness of poisoning (backdoor) attacks on a number of benchmark datasets from the domain of autonomous driving.
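As a concrete illustration of the backdoor mechanism described in the abstract, the sketch below shows data poisoning in its simplest form. This is not the authors' implementation: the square trigger, its size and position, and the poisoning rate are illustrative assumptions, and images/labels stand for any image classification training set (for instance, a traffic sign dataset) held as NumPy arrays.

```python
import numpy as np

def add_trigger(image, patch_size=4, value=255):
    """Stamp a small square trigger (the backdoor 'key') into the
    bottom-right corner of an image of shape (H, W, C), dtype uint8.
    The patch shape and placement are illustrative assumptions."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_fraction=0.1, seed=0):
    """Poison a fraction of the training set: embed the trigger into
    randomly chosen samples and relabel them to the attacker's target
    class. Returns poisoned copies plus the affected indices; all
    other samples are left untouched."""
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger pattern with target_class: at test time, stamping the same trigger onto any input (e.g., a stop sign image) steers the prediction to the attacker's chosen class, while clean inputs are still classified normally.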
Document type: Conference papers

Cited literature: 19 references

https://hal.inria.fr/hal-02520034
Contributor: Hal Ifip
Submitted on: Thursday, March 26, 2020 - 1:48:05 PM
Last modification on: Tuesday, March 31, 2020 - 3:50:53 PM
Long-term archiving on: Saturday, June 27, 2020 - 2:19:25 PM

File

Restricted access: to satisfy the distribution rights of the publisher, the document is embargoed until 2022-01-01.


Licence


Distributed under a Creative Commons Attribution 4.0 International License

Identifiers

HAL Id: hal-02520034
DOI: 10.1007/978-3-030-29726-8_18

Citation

Huma Rehman, Andreas Ekelhart, Rudolf Mayer. Backdoor Attacks in Neural Networks – A Systematic Evaluation on Multiple Traffic Sign Datasets. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.285-300, ⟨10.1007/978-3-030-29726-8_18⟩. ⟨hal-02520034⟩
