Abstract: Many applications of AI, from credit lending and recidivism prediction to medical diagnosis support, involve scoring individuals using a learned function of their attributes. These predictive risk scores are then used to make decisions based on whether the score exceeds a certain threshold, which may vary depending on the context. The level of delegation granted to such systems will heavily depend on how questions of fairness can be answered. In this paper, we study fairness for the problem of learning scoring functions from binary labeled data, a standard learning task known as bipartite ranking. We argue that the functional nature of the ROC curve, the gold standard measure of ranking performance in this context, leads to several possible ways of formulating fairness constraints. We introduce general classes of fairness definitions based on the AUC and on ROC curves, and establish generalization bounds for scoring functions learned under such constraints. Beyond the theoretical formulation and results, we design practical learning algorithms and illustrate our approach with numerical experiments on real and synthetic data.
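To make the AUC-based family of fairness notions concrete, the following minimal sketch measures one simple instance of such a criterion: the gap between the within-group AUCs of a fixed scoring function across a binary sensitive attribute. This is an illustrative example, not the paper's learning algorithm; the function names and the synthetic data are assumptions introduced here.

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney estimate of the AUC: the probability that a
    positive example receives a higher score than a negative one
    (ties counted as 1/2)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def intra_group_auc_gap(scores, labels, groups):
    """Absolute difference between the AUCs computed within each
    sensitive group -- one simple AUC-based fairness measure
    (illustrative name, not from the paper)."""
    aucs = [auc(scores[groups == g], labels[groups == g]) for g in (0, 1)]
    return abs(aucs[0] - aucs[1])

# Synthetic data in which the score separates positives from
# negatives much better in group 0 than in group 1, so the
# fairness measure is far from zero.
rng = np.random.default_rng(0)
n = 2000
groups = rng.integers(0, 2, n)
labels = rng.integers(0, 2, n)
scores = labels * np.where(groups == 0, 2.0, 0.5) + rng.normal(size=n)

print(round(intra_group_auc_gap(scores, labels, groups), 3))
```

A learning algorithm in this spirit would penalize such a gap during training rather than merely report it after the fact.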