On Sample Optimality in Personalized Collaborative and Federated Learning - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, Year: 2022

On Sample Optimality in Personalized Collaborative and Federated Learning

Mathieu Even
Laurent Massoulié
Kevin Scaman

Abstract

In personalized federated learning, each member of a potentially large set of agents aims to train a model minimizing its loss function averaged over its local data distribution. We study this problem through the lens of stochastic optimization, focusing on a scenario with a large number of agents, each of which possesses very few samples from its local data distribution. Specifically, we prove novel matching lower and upper bounds on the number of samples required from all agents to approximately minimize the generalization error of a fixed agent. We provide strategies matching these lower bounds, based on a gradient filtering approach: given prior knowledge of some notion of distance between local data distributions, agents filter and aggregate the stochastic gradients they receive from other agents in order to achieve an optimal bias-variance trade-off. Finally, we quantify the impact of using rough estimates of the distances between agents' local distributions, computed from a very small number of local samples.
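The gradient-filtering idea described above can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: it replaces the paper's bias-variance-optimal weighting with a simple hard-threshold rule, where an agent keeps only gradients from agents whose estimated distribution distance falls below a cutoff, and the function name and parameters are invented for the example.

```python
def filtered_gradient(my_grad, neighbor_grads, distances, threshold):
    """Hypothetical gradient-filtering sketch.

    my_grad:        this agent's own stochastic gradient (list of floats)
    neighbor_grads: stochastic gradients received from other agents
    distances:      estimated distances between this agent's local
                    distribution and each neighbor's (same order)
    threshold:      cutoff below which a neighbor's gradient is kept

    Keeping more neighbors reduces variance (more samples averaged)
    but increases bias (their distributions differ from ours).
    """
    kept = [my_grad] + [
        g for g, d in zip(neighbor_grads, distances) if d <= threshold
    ]
    dim = len(my_grad)
    # Plain average of the retained gradients, coordinate by coordinate.
    return [sum(g[i] for g in kept) / len(kept) for i in range(dim)]


# Example: the first neighbor is close (kept), the second is far (filtered out).
g = filtered_gradient(
    my_grad=[1.0, 2.0],
    neighbor_grads=[[3.0, 4.0], [100.0, 100.0]],
    distances=[0.1, 5.0],
    threshold=1.0,
)
# g averages [1.0, 2.0] and [3.0, 4.0], giving [2.0, 3.0]
```

The threshold controls the bias-variance trade-off the abstract refers to: a larger threshold averages over more agents (lower variance, higher bias from distribution mismatch), while a threshold of zero reduces to purely local training.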
Main file
9814_on_sample_optimality_in_person.pdf (532.62 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03902927, version 1 (16-12-2022)

Identifiers

  • HAL Id: hal-03902927, version 1

Cite

Mathieu Even, Laurent Massoulié, Kevin Scaman. On Sample Optimality in Personalized Collaborative and Federated Learning. NeurIPS 2022 - 36th Conference on Neural Information Processing Systems, Nov 2022, New Orleans, United States. ⟨hal-03902927⟩
