Towards Privacy Aware Deep Learning for Embedded Systems - Inria (Institut national de recherche en sciences et technologies du numérique)
Conference paper, Year: 2022

Towards Privacy Aware Deep Learning for Embedded Systems

Abstract

Memorization of training data by deep neural networks enables an adversary to mount successful membership inference attacks: an adversary with black-box query access to the model can infer, from the output predictions alone, whether an individual's data record was part of the model's sensitive training data. This violates both the confidentiality of proprietary training data, by inferring samples from it, and the privacy of the individuals whose sensitive records were used to train the model. The threat is particularly acute in commercial embedded systems with on-device processing. Addressing it requires neural networks that are private by design while conforming to the memory, power, and computation constraints of embedded systems, which is lacking in the literature. We present the first work towards membership privacy by design in neural networks that reconciles the privacy-accuracy-efficiency trade-off for embedded systems. We conduct an extensive privacy-centred neural network design space exploration to understand the membership privacy risks of well-adopted state-of-the-art techniques: model compression via pruning, quantization, and off-the-shelf efficient architectures. We study the impact of model capacity on memorization of training data and show that compressed models (after retraining) leak more membership information than baseline uncompressed models, while off-the-shelf architectures do not satisfy all efficiency requirements. Based on these observations, we identify quantization as a potential design choice for addressing the three-dimensional trade-off. We propose the Gecko training methodology, which explicitly adds resistance to membership inference attacks as a design objective alongside the memory, computation, and power constraints of embedded devices. We show that models trained with Gecko are comparable to prior defences against black-box membership attacks in terms of accuracy and privacy while additionally providing efficiency. This enables Gecko models to be deployed on embedded systems while providing membership privacy.
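The black-box membership inference attack described above can be sketched as a simple confidence-threshold test, a common baseline in the literature: because models tend to be more confident on memorized training records, the adversary predicts "member" when the top output confidence exceeds a threshold. The function name, threshold, and toy numbers below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of a black-box membership inference attack via
# confidence thresholding. The adversary only observes the model's
# output confidences (no access to weights or training data).
from typing import List

def infer_membership(confidences: List[float], threshold: float = 0.9) -> List[bool]:
    """Flag a record as a training-set member when the model's
    top-class confidence on it meets or exceeds the threshold."""
    return [c >= threshold for c in confidences]

# Toy query results: memorized training records typically receive
# near-certain predictions, unseen records lower confidence.
member_conf = [0.99, 0.97, 0.95]      # queried training records
nonmember_conf = [0.70, 0.55, 0.88]   # queried unseen records

print(infer_membership(member_conf))
print(infer_membership(nonmember_conf))
```

Defences such as the one the paper targets aim to shrink the confidence gap between members and non-members so that no threshold separates the two groups well.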
Main file: Gecko (22).pdf (671.43 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03781979, version 1 (20-09-2022)

Identifiers

  • HAL Id: hal-03781979, version 1

Cite

Vasisht Duddu, Antoine Boutet, Virat Shejwalkar. Towards Privacy Aware Deep Learning for Embedded Systems. SAC 2022 - 37th ACM/SIGAPP Symposium on Applied Computing, Apr 2022, Virtual, France. pp.520-529. ⟨hal-03781979⟩
15 views
53 downloads
