End-to-End Incremental Learning

Conference paper, Year: 2018

Abstract

Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model---a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.
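
To make the loss described in the abstract concrete, below is a minimal PyTorch sketch combining a cross-entropy term over all classes with a distillation term on the old classes. The function name, the temperature value, the KL-divergence form of the distillation term, and the equal weighting of the two terms are illustrative assumptions, not the paper's exact formulation or hyper-parameters.

import torch
import torch.nn.functional as F

def incremental_loss(logits, labels, old_logits, num_old_classes, temperature=2.0):
    # Cross-entropy over all (old + new) classes: learns the new classes.
    ce = F.cross_entropy(logits, labels)
    # Distillation on the old classes: match the frozen old model's
    # temperature-softened output distribution.
    log_p = F.log_softmax(logits[:, :num_old_classes] / temperature, dim=1)
    q = F.softmax(old_logits[:, :num_old_classes] / temperature, dim=1)
    distill = F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2
    # Equal weighting of the two terms is an assumption, not a tuned setting.
    return ce + distill

Here old_logits would come from a frozen snapshot of the network taken before the new classes were added, evaluated on the same mini-batch (new data plus the stored old-class exemplars).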
Main file: IncrementalLearning_ECCV2018.pdf (1.02 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01849366, version 1 (26-07-2018)

Identifiers

Cite

Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, Karteek Alahari. End-to-End Incremental Learning. ECCV 2018 - European Conference on Computer Vision, Sep 2018, Munich, Germany. pp. 241-257, ⟨10.1007/978-3-030-01258-8_15⟩. ⟨hal-01849366⟩
