Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory
Journal article in ACM Transactions on Mathematical Software, 2024



Abstract

Training feed-forward deep neural networks is a memory-intensive operation that is usually performed on GPUs with limited memory capacity. This may force data scientists to limit the depth of their models or the resolution of the input data if the data does not fit in GPU memory. The re-materialization technique, whose idea comes from the checkpointing strategies developed in the Automatic Differentiation literature, allows data scientists to limit the memory requirements related to the storage of intermediate data (activations), at the cost of an increase in computation. This paper introduces a new re-materialization strategy for activations that significantly reduces memory usage. It consists of selecting which activations are saved and which are deleted during the forward phase, and then recomputing the deleted activations when they are needed during the backward phase. We propose an original computation model that combines two types of activation saving: either storing only the layer inputs, or recording the complete history of operations that produced the outputs. This paper focuses on the fully heterogeneous case, where the computation time and the memory requirement of each layer are different. We prove that finding the optimal solution is NP-hard and that classical techniques from the Automatic Differentiation literature do not apply. Moreover, the classical assumption of memory persistence of materialized activations, used to simplify the search for optimal solutions, no longer holds. Thus, we propose a weak memory persistence property and provide a dynamic program to compute the optimal sequence of computations. This algorithm is made available through the rotor software, a PyTorch plug-in that handles any network consisting of a sequence of layers, each of which may have an arbitrarily complex structure.
Through extensive experiments, we show that our implementation consistently outperforms existing re-materialization approaches for a large class of networks, image sizes and batch sizes.
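The core mechanism the abstract describes, discarding some activations in the forward phase and recomputing them from the nearest saved activation during the backward phase, can be illustrated with a minimal sketch on a simple chain. The `forward` and `rematerialize` helpers and the `checkpoints` set below are hypothetical names for illustration only; the actual rotor plug-in selects the checkpoint set optimally via the paper's dynamic program and handles heterogeneous layers.

```python
# Minimal sketch of activation re-materialization on a chain of layers.
# Illustrative only: the checkpoint set is given here by hand, whereas
# the paper computes the optimal selection with a dynamic program.

def forward(x, layers, checkpoints):
    """Run the forward pass, storing only the activations whose index
    is in `checkpoints`; all other activations are discarded to save
    memory."""
    stored = {0: x}  # the chain input is always kept
    a = x
    for i, f in enumerate(layers, start=1):
        a = f(a)
        if i in checkpoints:
            stored[i] = a
    return a, stored

def rematerialize(i, layers, stored):
    """Recompute activation i, as needed during the backward phase,
    by restarting from the closest stored activation at or before i."""
    j = max(k for k in stored if k <= i)
    a = stored[j]
    for k in range(j, i):  # replay the forward computation from j to i
        a = layers[k](a)
    return a
```

For example, with four layers that each add 1 to their input and only activation 2 checkpointed, the forward pass stores two values instead of five, and activation 3 is later rebuilt from the stored activation 2 at the cost of one extra layer evaluation. This is the memory/computation trade-off the paper optimizes.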
Main file: paper.pdf (913.4 KB)
Origin: Files produced by the author(s)
Licence: CC BY-NC (Attribution, no commercial use)

Dates and versions

hal-02352969 , version 1 (25-11-2019)
hal-02352969 , version 2 (06-02-2024)

Licence

Attribution

Identifiers

HAL Id: hal-02352969

Cite

Olivier Beaumont, Lionel Eyraud-Dubois, Julien Herrmann, Alexis Joly, Alena Shilova. Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory. ACM Transactions on Mathematical Software, in press. ⟨hal-02352969v2⟩
734 views
546 downloads

