Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory

Olivier Beaumont 1, 2, 3 Lionel Eyraud-Dubois 1, 2, 3 Julien Herrmann 2, 3, 4 Alexis Joly 5, 6, 7 Alena Shilova 1, 2, 3
1 HiePACS - High-End Parallel Algorithms for Challenging Numerical Simulations
2 LaBRI - Laboratoire Bordelais de Recherche en Informatique
3 Inria Bordeaux - Sud-Ouest
4 TADAAM - Topology-Aware System-Scale Data Management for High-Performance Computing
5 ZENITH - Scientific Data Management
6 LIRMM - Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier
7 CRISAM - Inria Sophia Antipolis - Méditerranée
Abstract: This paper introduces a new activation checkpointing method that makes it possible to significantly decrease memory usage when training Deep Neural Networks with the back-propagation algorithm. Like checkpointing techniques from the Automatic Differentiation literature, it dynamically selects the forward activations that are saved during the training phase, and then automatically recomputes missing activations from those previously recorded. We propose an original computation model that combines two types of activation savings: either storing only the layer inputs, or recording the complete history of operations that produced the outputs (this uses more memory, but requires fewer recomputations in the backward phase), and we provide an algorithm to compute the optimal computation sequence for this model. This paper also describes a PyTorch implementation that processes the entire chain: it handles any sequential DNN whose internal layers may be arbitrarily complex, and automatically executes it according to the optimal checkpointing strategy computed for a given memory limit. Through extensive experiments, we show that our implementation consistently outperforms existing checkpointing approaches for a large class of networks, image sizes and batch sizes.
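To give a concrete sense of the kind of optimization the abstract describes, here is a minimal sketch of the classical dynamic program for checkpointing a *homogeneous* chain (Griewank's "revolve" model). This is a simplification, not the authors' algorithm: the paper generalizes this model to heterogeneous layer costs and two kinds of saved activations, whereas this version only counts forward re-executions under uniform costs.

```python
# Hedged sketch (simplified homogeneous model, not the paper's algorithm):
# minimal number of extra forward steps needed to backpropagate a chain of
# l layers when at most s activations fit in memory.
from functools import lru_cache

@lru_cache(maxsize=None)
def opt_recompute(l: int, s: int) -> int:
    """Optimal recomputation cost for a homogeneous chain of l layers
    with s memory slots (each slot holds one stored activation)."""
    if l <= 1:
        return 0
    if s == 1:
        # Only the chain input fits: restart from it for every layer,
        # paying l-1, l-2, ..., 1 forward steps.
        return l * (l - 1) // 2
    # Place the first checkpoint after j layers: pay j forward steps,
    # reverse the tail (l - j layers, s - 1 free slots), then reuse the
    # freed slot to reverse the head (j layers, s slots).
    return min(
        j + opt_recompute(l - j, s - 1) + opt_recompute(j, s)
        for j in range(1, l)
    )
```

For example, `opt_recompute(3, 2)` evaluates to 2: with two memory slots, a three-layer chain can be reversed at the price of two extra forward steps. The paper's contribution is to solve this kind of trade-off optimally when layers have unequal compute and memory costs.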
Cited literature: 11 references

https://hal.inria.fr/hal-02352969
Contributor: Lionel Eyraud-Dubois
Submitted on: Monday, November 25, 2019 - 2:24:09 PM
Last modification on: Thursday, January 9, 2020 - 9:56:07 AM

Files

RR-9302.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02352969, version 1
  • ARXIV : 1911.13214

Citation

Olivier Beaumont, Lionel Eyraud-Dubois, Julien Herrmann, Alexis Joly, Alena Shilova. Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory. [Research Report] RR-9302, Inria Bordeaux Sud-Ouest. 2019. ⟨hal-02352969⟩

Metrics

  • Record views: 74
  • File downloads: 56