Conference paper, 2022

Beyond L1: Faster and Better Sparse Models with skglm

Abstract

We propose a new, fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties. Our algorithm can solve problems with millions of samples and features in seconds, by relying on coordinate descent, working sets and Anderson acceleration. It handles previously unaddressed models, and is extensively shown to improve over state-of-the-art algorithms. We release skglm, a flexible, scikit-learn compatible package, which easily handles customized datafits and penalties.
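skglm's solver combines coordinate descent with working sets and Anderson acceleration; for the full estimator API, see the package itself. As a minimal illustration of the coordinate-descent building block alone, here is a plain NumPy sketch for the Lasso (the L1 baseline the title refers to). Function names here are illustrative, not skglm's API.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def cd_lasso(X, y, alpha, n_iter=100):
    """Plain cyclic coordinate descent for the Lasso:
        min_w  0.5/n * ||y - X w||^2 + alpha * ||w||_1
    Each coordinate update is a 1D proximal gradient step, kept cheap
    by maintaining the residual y - X w incrementally.
    """
    n, p = X.shape
    w = np.zeros(p)
    lipschitz = (X ** 2).sum(axis=0) / n  # per-coordinate Lipschitz constants
    residual = y - X @ w
    for _ in range(n_iter):
        for j in range(p):
            if lipschitz[j] == 0.0:
                continue
            old = w[j]
            grad_j = -X[:, j] @ residual / n
            w[j] = soft_threshold(old - grad_j / lipschitz[j],
                                  alpha / lipschitz[j])
            if w[j] != old:
                residual += X[:, j] * (old - w[j])
    return w
```

Swapping the soft-thresholding step for the proximal operator of another separable penalty (e.g. MCP) gives the non-convex variants the abstract mentions; the working-set and Anderson-acceleration layers that make skglm fast sit on top of this inner loop.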
Main file: main.pdf (1.13 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03819082, version 1 (20-10-2022)

Identifiers

  • HAL Id: hal-03819082, version 1

Cite

Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias. Beyond L1: Faster and Better Sparse Models with skglm. 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Nov 2022, New Orleans, United States. ⟨hal-03819082⟩
