Rethinking deep active learning: Using unlabeled data at model training
Conference paper. Year: 2021


Abstract

Active learning typically focuses on training a model on a few labeled examples alone, while unlabeled ones are only used for acquisition. In this work we depart from this setting by using both labeled and unlabeled data during model training across active learning cycles. We do so by applying unsupervised feature learning at the beginning of the active learning pipeline and semi-supervised learning at every active learning cycle, on all available data. The former has not been investigated before in active learning, while the study of the latter in the context of deep learning is scarce, and recent findings are not conclusive with respect to its benefit. Our idea is orthogonal to acquisition strategies in that it uses more data, much like ensemble methods use more models. By systematically evaluating a number of popular acquisition strategies and datasets, we find that the use of unlabeled data during model training brings a surprising accuracy improvement in image classification, compared to the differences between acquisition strategies. We thus explore smaller label budgets, even one label per class.
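The pipeline described above can be sketched as an active learning loop in which the model is trained semi-supervised on all data at every cycle, rather than on the labeled pool alone. The sketch below is a minimal illustration under assumed choices, not the authors' setup: scikit-learn's `LabelSpreading` stands in for the semi-supervised learner, the digits dataset for the image benchmark, and max-entropy for the acquisition strategy; the small-budget regime starts from one label per class, as in the abstract.

```python
# Minimal active learning sketch: semi-supervised training on ALL data at
# every cycle. LabelSpreading, the digits dataset, and entropy acquisition
# are illustrative stand-ins, not the paper's exact components.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y_true = load_digits(return_X_y=True)

# Start with one label per class (the smallest budget mentioned above).
labeled = np.array([np.flatnonzero(y_true == c)[0] for c in range(10)])

for cycle in range(3):
    # Semi-supervised training on all data: unlabeled points get label -1.
    y_partial = np.full_like(y_true, -1)
    y_partial[labeled] = y_true[labeled]
    model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)

    # Acquisition step: query the most uncertain (max-entropy) points.
    probs = model.predict_proba(X)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    entropy[labeled] = -np.inf          # never re-acquire labeled points
    new = np.argsort(entropy)[-10:]     # budget of 10 new labels per cycle
    labeled = np.concatenate([labeled, new])

acc = (model.transduction_ == y_true).mean()
print(f"labels used: {len(labeled)}, transductive accuracy: {acc:.2f}")
```

The key departure from the classical setting is inside the loop: `fit` sees every example, with `-1` marking the unlabeled ones, so the unlabeled pool shapes the model itself instead of serving only as an acquisition reservoir.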

Dates and versions

hal-02372102, version 1 (20-11-2019)

Identifiers

Cite

Oriane Siméoni, Mateusz Budnik, Yannis Avrithis, Guillaume Gravier. Rethinking deep active learning: Using unlabeled data at model training. ICPR 2020 - 25th International Conference on Pattern Recognition, Jan 2021, Milan, Italy. pp.1-12. ⟨hal-02372102⟩