VICReg: Variance-Invariance-Covariance Regularization For Self-Supervised Learning - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, 2022

VICReg: Variance-Invariance-Covariance Regularization For Self-Supervised Learning

Abstract

Recent self-supervised methods for image representation learning maximize the agreement between embedding vectors produced by encoders fed with different views of the same image. The main challenge is to prevent a collapse in which the encoders produce constant or non-informative vectors. We introduce VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with two regularization terms applied to both embeddings separately: (1) a term that maintains the variance of each embedding dimension above a threshold, and (2) a term that decorrelates each pair of variables. Unlike most other approaches to the same problem, VICReg does not require techniques such as weight sharing between the branches, batch normalization, feature-wise normalization, output quantization, stop gradient, or memory banks, and achieves results on par with the state of the art on several downstream tasks. In addition, we show that our variance regularization term stabilizes the training of other methods and leads to performance improvements.
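The two regularization terms described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the function name, threshold `gamma`, and epsilon value are assumptions chosen for readability; the full VICReg loss also includes an invariance term (the mean-squared distance between the two branches' embeddings) and per-term weights, for which see the paper.

```python
import numpy as np

def vicreg_reg_terms(z, gamma=1.0, eps=1e-4):
    """Illustrative variance and covariance regularization terms
    for one batch of embeddings z with shape (batch, dim)."""
    n, d = z.shape
    # Variance term: hinge loss pushing the standard deviation of
    # each embedding dimension above the threshold gamma, which
    # prevents collapse to a constant vector.
    std = np.sqrt(z.var(axis=0) + eps)
    var_loss = np.mean(np.maximum(0.0, gamma - std))
    # Covariance term: penalize the squared off-diagonal entries of
    # the batch covariance matrix, decorrelating pairs of variables
    # so that dimensions do not carry redundant information.
    zc = z - z.mean(axis=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    cov_loss = np.sum(off_diag ** 2) / d
    return var_loss, cov_loss
```

Both terms are applied to each branch's embeddings separately; a collapsed batch (all embeddings identical) makes the variance term saturate near `gamma` while the covariance term vanishes.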
Main file: vicreg_iclr_2022.pdf (640.86 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03541297 , version 1 (24-01-2022)


Cite

Adrien Bardes, Jean Ponce, Yann LeCun. VICReg: Variance-Invariance-Covariance Regularization For Self-Supervised Learning. ICLR 2022 - International Conference on Learning Representations, Apr 2022, Online, United States. ⟨hal-03541297⟩
532 views
654 downloads

