Poster, Year: 2022

Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations

Abstract

In this paper, we propose the first verification method for attention robustness, i.e., the local robustness of saliency-map changes against combinations of semantic perturbations. Specifically, our method determines the range of perturbation parameters (e.g., the amount of brightness change) within which the difference between the actual saliency-map change and the expected saliency-map change stays below a given threshold. Our method is based on traversing linear activation regions and focuses on the outermost boundary of attention robustness for scalability on larger deep neural networks.
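The robustness criterion described above can be sketched in code. The following toy example (not the paper's algorithm; the network, the threshold `delta`, and the parameter grid are all illustrative assumptions) checks, for a small ReLU network, which brightness-shift parameters keep the gradient saliency map within a threshold of the unperturbed one:

```python
import numpy as np

# Toy 2-layer ReLU network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4)); b1 = rng.standard_normal(8)
W2 = rng.standard_normal((1, 8)); b2 = rng.standard_normal(1)

def saliency(x):
    # Gradient of the scalar output w.r.t. the input (simple gradient saliency).
    z = W1 @ x + b1
    mask = (z > 0).astype(float)   # ReLU activation pattern of this region
    return ((W2 * mask) @ W1).ravel()

def attention_robust_betas(x, delta, betas):
    # Keep the brightness shifts beta for which the saliency-map change
    # ||S(x + beta) - S(x)|| stays below the threshold delta (here the
    # "expected" saliency change under a brightness shift is zero).
    s0 = saliency(x)
    return [b for b in betas
            if np.linalg.norm(saliency(x + b) - s0) <= delta]

x = rng.standard_normal(4)
betas = np.linspace(-1.0, 1.0, 201)
robust = attention_robust_betas(x, delta=1e-6, betas=betas)
```

Because the gradient saliency of a ReLU network is constant within each linear activation region, the robust parameter values here are exactly those that preserve the activation pattern, which is why traversing activation regions (as the method does) characterizes the robustness range instead of sampling it.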
Main file: main.pdf (1.41 MB). Origin: files produced by the author(s).

Dates and versions

hal-03926269 , version 1 (06-01-2023)

Identifiers

  • HAL Id : hal-03926269 , version 1

Cite

Satoshi Munakata, Caterina Urban, Haruki Yokoyama, Koji Yamamoto, Kazuki Munakata. Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations. 29th Asia-Pacific Software Engineering Conference (APSEC 2022), Dec 2022, [Virtual], Japan. ⟨hal-03926269⟩
