Dissecting Causal Biases - Inria - Institut national de recherche en sciences et technologies du numérique
Preprints, Working Papers, ... Year: 2024

Dissecting Causal Biases

Abstract

Accurately measuring discrimination in machine learning-based automated decision systems is required to address the vital issue of fairness between subpopulations and/or individuals. Any bias in measuring discrimination can lead to either amplification or underestimation of its true value. This paper focuses on a class of biases originating in the way training data is generated and/or collected. We call this class causal biases and use tools from the field of causality to formally define and analyze such biases. Four sources of bias are considered, namely, confounding, selection, measurement, and interaction. The main contribution of this paper is to provide, for each source of bias, a closed-form expression in terms of the model parameters. This makes it possible to analyze the behavior of each source of bias, in particular, the cases in which it is absent and those in which it is maximized. We hope that the provided characterizations help the community better understand the sources of bias in machine learning applications.
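The abstract does not reproduce the paper's formal definitions or closed-form expressions. As a rough, hypothetical illustration of one of the four sources listed (confounding), the following sketch simulates a confounder Z that influences both a sensitive attribute A and an outcome Y, while A has no direct effect on Y; the naive discrimination measure E[Y|A=1] - E[Y|A=0] is then nonzero purely due to confounding, and stratifying on Z removes it. All distributions and parameter values here are made up for illustration and are not taken from the paper.

```python
import random

random.seed(0)

# Hypothetical simulation (not from the paper): Z confounds A and Y.
n = 100_000
data = []
for _ in range(n):
    z = random.random() < 0.5                    # confounder Z
    a = random.random() < (0.8 if z else 0.2)    # A depends on Z
    y = random.random() < (0.7 if z else 0.3)    # Y depends only on Z, not on A
    data.append((z, a, y))

def mean_y(rows):
    rows = list(rows)
    return sum(y for _, _, y in rows) / len(rows)

# Naive discrimination measure: E[Y | A=1] - E[Y | A=0]
naive = mean_y(r for r in data if r[1]) - mean_y(r for r in data if not r[1])

# Adjust for Z: average the within-stratum gaps, weighted by P(Z=z)
adjusted = 0.0
for zval in (True, False):
    stratum = [r for r in data if r[0] == zval]
    gap = mean_y(r for r in stratum if r[1]) - mean_y(r for r in stratum if not r[1])
    adjusted += gap * len(stratum) / n

print(f"naive gap:    {naive:+.3f}")    # inflated by confounding (analytically +0.24)
print(f"adjusted gap: {adjusted:+.3f}") # close to the true direct effect of A, which is 0
```

With these (invented) parameters the naive gap is about +0.24 even though A has no effect on Y, which is the kind of discrepancy between a measured and a true discrimination value that the paper characterizes in closed form.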
Main file: Causal_Biases_Hal-23.pdf (5.16 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04329098, version 1 (07-12-2023)
hal-04329098, version 2 (21-01-2024)

Licence

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04329098, version 2

Cite

Rūta Binkytė, Sami Zhioua, Yassine Turki. Dissecting Causal Biases. 2024. ⟨hal-04329098v2⟩