Journal articles

Perfectly Parallel Fairness Certification of Neural Networks

Abstract: There is growing concern that machine-learned software, which currently assists or even automates decision making, reproduces, and in the worst case reinforces, bias present in its training data. The development of tools and techniques for certifying the fairness of this software, or for describing its biases, is therefore critical. In this paper, we propose a perfectly parallel static analysis for certifying the fairness of feed-forward neural networks used for classification of tabular data. When certification succeeds, our approach provides definite guarantees; otherwise, it describes and quantifies the biased regions of the input space. We design the analysis to be sound, in practice also exact, and configurable in terms of scalability and precision, thereby enabling pay-as-you-go certification. We implement our approach in an open-source tool called Libra and demonstrate its effectiveness on neural networks trained on popular datasets.
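To make the certification idea concrete, here is a minimal, hypothetical sketch of how interval-based abstract interpretation can soundly certify that a toy ReLU network's classification is the same for every input in a box, including every value of a sensitive feature. This is not the paper's Libra implementation (Libra uses more precise abstract domains and partitions the input space for parallelism); the network weights and the `certify_region` helper are illustrative assumptions only.

```python
import numpy as np

# Toy feed-forward net: inputs (x0, x1, s), where s is the sensitive feature;
# one hidden ReLU layer, two output classes. Weights are made up for illustration
# (s has zero weight, so this network is trivially fair by construction).
W1 = np.array([[1.0, -0.5, 0.0],
               [0.5,  1.0, 0.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])
b2 = np.array([0.0, 0.0])

def interval_affine(lo, hi, W, b):
    # Propagate an input box [lo, hi] through an affine layer with
    # standard interval arithmetic: split W into positive/negative parts.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def certify_region(lo, hi):
    """Return the certified class if every input in the box [lo, hi]
    (including all values of the sensitive feature) provably gets the
    same class; return None if the analysis is inconclusive here."""
    l, h = interval_affine(lo, hi, W1, b1)
    l, h = np.maximum(l, 0), np.maximum(h, 0)   # ReLU is monotone
    l, h = interval_affine(l, h, W2, b2)
    # Class i is certified if its logit lower bound beats the other's upper bound.
    if l[0] > h[1]:
        return 0
    if l[1] > h[0]:
        return 1
    return None   # a real analysis would split the region and recurse

# Box over the non-sensitive features, with s ranging over its whole domain [0, 1]:
lo = np.array([0.8, 0.0, 0.0])
hi = np.array([1.0, 0.2, 1.0])
print(certify_region(lo, hi))   # prints 0: class is independent of s on this box
```

When `certify_region` returns None on a region, an analysis of this shape would subdivide the region and retry, which is where the "perfectly parallel" and "pay-as-you-go" aspects described in the abstract come in: regions are independent, and the subdivision budget trades scalability for precision.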

https://hal.inria.fr/hal-03091870
Contributor: Caterina Urban
Submitted on: Thursday, December 31, 2020 - 6:44:15 PM
Last modification on: Monday, January 11, 2021 - 9:18:41 AM

File

OOPSLA2020.pdf
Files produced by the author(s)

Citation

Caterina Urban, Maria Christakis, Valentin Wüstholz, Fuyuan Zhang. Perfectly Parallel Fairness Certification of Neural Networks. Proceedings of the ACM on Programming Languages, ACM, 2020, 4 (OOPSLA), pp.1-30. ⟨10.1145/3428253⟩. ⟨hal-03091870⟩
