Federated Learning Aggregation: New Robust Algorithms with Guarantees - Inria - Institut national de recherche en sciences et technologies du numérique
Conference Paper, Year: 2022

Federated Learning Aggregation: New Robust Algorithms with Guarantees

Abstract

Federated learning has recently been proposed for distributed model training at the edge. The principle of this approach is to aggregate models learned on distributed clients to obtain a new, more general "average" model (FedAvg). The resulting model is then redistributed to clients for further training. To date, the most popular federated learning algorithm uses coordinate-wise averaging of the model parameters for aggregation. In this paper, we carry out a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework. From this, we derive novel aggregation algorithms that modify their aggregation behavior by differentiating client contributions according to the value of their losses. Moreover, we go beyond the assumptions introduced in the theory by evaluating the performance of these strategies, and by comparing them with that of FedAvg on classification tasks, in both the IID and the non-IID settings, without additional hypotheses.
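The coordinate-wise averaging that the abstract describes can be sketched in a few lines. The snippet below shows standard FedAvg-style weighted averaging of client parameter arrays, plus a hypothetical loss-weighted variant to illustrate the idea of differentiating client contributions by their losses; it is not the paper's exact algorithm, and the function names and weighting scheme are assumptions for illustration.

```python
# Sketch of coordinate-wise aggregation for federated learning.
# Each client model is represented as a list of NumPy parameter arrays
# (one array per layer), all clients sharing the same architecture.
import numpy as np

def fed_avg(client_models, client_sizes):
    """FedAvg-style aggregation: average each coordinate, weighting
    clients by their local dataset size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return [
        sum(w * layers[i] for w, layers in zip(weights, client_models))
        for i in range(len(client_models[0]))
    ]

def loss_weighted_avg(client_models, client_losses):
    """Hypothetical variant: weight client contributions by the value
    of their losses instead of dataset size (illustration only)."""
    total = sum(client_losses)
    weights = [l / total for l in client_losses]
    return [
        sum(w * layers[i] for w, layers in zip(weights, client_models))
        for i in range(len(client_models[0]))
    ]
```

For example, averaging two single-layer clients with parameters `[0, 2]` and `[2, 4]` and equal dataset sizes yields `[1, 3]`, while weighting by losses `1` and `3` shifts the result toward the higher-loss client.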
No file deposited

Dates and versions

hal-03933597 , version 1 (10-01-2023)

License

Attribution (CC BY)

Identifiers

  • HAL Id : hal-03933597 , version 1

Cite

Adnan Ben Mansour, Gaia Carenini, Alexandre Duplessis, David Naccache. Federated Learning Aggregation: New Robust Algorithms with Guarantees. IEEE ICMLA 2022, Jan 2022, Bahamas, Bahamas. ⟨hal-03933597⟩