Optimal GPU-CPU Offloading Strategies for Deep Neural Network Training

Olivier Beaumont, Lionel Eyraud-Dubois, Alena Shilova
HiePACS - High-End Parallel Algorithms for Challenging Numerical Simulations
LaBRI - Laboratoire Bordelais de Recherche en Informatique, Inria Bordeaux - Sud-Ouest
Abstract: Training Deep Neural Networks is known to be an expensive operation, both in terms of computational cost and memory load. Indeed, during training, all intermediate layer outputs (called activations) computed during the forward phase must be stored until the corresponding gradient has been computed in the backward phase. These memory requirements can rule out larger batch sizes and deeper networks, thereby limiting both convergence speed and accuracy. Recent works have proposed to offload some of the computed forward activations from the memory of the GPU to the memory of the CPU. This requires determining which activations should be offloaded, and when the transfers from and to the memory of the GPU should take place. We prove that this problem is NP-hard in the strong sense, and we propose two heuristics based on relaxations of the problem. We perform an extensive experimental evaluation on standard Deep Neural Networks. We compare the performance of our heuristics against previous approaches from the literature, showing that they achieve much better performance in a wide variety of situations.
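To make the decision problem concrete, here is a minimal illustrative sketch (not the paper's heuristics, whose details are in the full text): given per-layer activation sizes and a GPU memory budget, a naive greedy rule offloads the largest activations to CPU memory until the remainder fits. The function name and inputs are hypothetical, chosen only for illustration.

```python
# Hypothetical greedy baseline for activation offloading.
# activation_sizes[i] = memory footprint of layer i's forward activation;
# gpu_budget = memory available on the GPU for storing activations.

def choose_offloads(activation_sizes, gpu_budget):
    """Return the set of layer indices whose activations to offload to CPU."""
    resident = sum(activation_sizes)  # memory used if everything stays on GPU
    offload = set()
    # Evict layers in decreasing order of activation size until we fit.
    for idx in sorted(range(len(activation_sizes)),
                      key=lambda i: activation_sizes[i], reverse=True):
        if resident <= gpu_budget:
            break
        resident -= activation_sizes[idx]
        offload.add(idx)
    return offload
```

Such a greedy rule ignores when transfers happen relative to computation, which is precisely why the scheduling problem studied in the paper is harder (NP-hard in the strong sense): a good strategy must also overlap PCIe transfers with forward and backward computation.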

Contributor: Lionel Eyraud-Dubois
Submitted on : Monday, October 21, 2019 - 11:16:32 AM
Last modification on : Tuesday, October 22, 2019 - 1:33:39 AM




HAL Id: hal-02316266, version 2



Olivier Beaumont, Lionel Eyraud-Dubois, Alena Shilova. Optimal GPU-CPU Offloading Strategies for Deep Neural Network Training. 2019. ⟨hal-02316266v2⟩


