Conference paper, Year: 2019

Understanding Scalability and Fine-Grain Parallelism of Synchronous Data Parallel Training

Abstract

In the age of big data, deep learning has emerged as a powerful tool to extract insight from data and exploit its value, both in industry and in scientific applications. With the increasing complexity of learning models and the growing volume of training data, data-parallel approaches based on frequent all-reduce synchronization steps are increasingly popular. Although high performance computing (HPC) technologies have been designed to handle such communication patterns efficiently, the behavior of data-parallel approaches on HPC platforms is not well understood. To address this issue, in this paper we study the behavior of Horovod, a popular MPI-based data-parallel approach, on Theta, a pre-Exascale machine at Argonne National Laboratory. Using two representative applications, we explore two aspects: (1) how performance and scalability are affected by important parameters such as the number of nodes, number of workers, threads per node, and batch size; (2) how computational phases are interleaved with all-reduce communication phases at fine granularity, and what consequences this interleaving has in terms of potential bottlenecks. Our findings show that the pipelining of back-propagation, gradient reduction, and weight updates only partially mitigates the effects of stragglers during all-reduce. Furthermore, there can be significant delays between weight updates, which can be leveraged to mask the overhead of additional background operations that are coupled with the training.
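The abstract refers to Horovod's synchronous data-parallel scheme: each MPI rank holds an identical model replica, and gradients are averaged with an all-reduce before every weight update. The following minimal sketch illustrates how such a setup is typically expressed with the Horovod TensorFlow/Keras API; the model, dataset, and hyperparameters are placeholders for illustration only, not the configurations evaluated in the paper.

```python
# Illustrative sketch of Horovod-style synchronous data-parallel training.
# Model, dataset, and hyperparameters are placeholders, not the paper's setup.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one Horovod worker per MPI rank

# Each worker builds an identical replica of the model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the number of workers (common practice for
# synchronous data parallelism) and wrap the optimizer so gradients are
# averaged with an all-reduce before each weight update.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss="sparse_categorical_crossentropy", optimizer=opt)

# Broadcast initial weights from rank 0 so all replicas start identical.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# Placeholder data; in practice each worker processes its own batches/shard.
(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0

model.fit(x, y, batch_size=64, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Such a script would typically be launched with one process per worker, e.g. `horovodrun -np 4 python train.py` or an equivalent mpirun invocation; the number of nodes, workers per node, threads per node, and batch size are exactly the parameters whose impact on performance and scalability the paper studies.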
Main file: Understanding_Scalability_and_Fine_Grain_Parallelism_of_Synchronous_Data_Parallel_Training.pdf (801.99 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02570148, version 1 (11-05-2020)

Identifiers

Cite

Jiali Li, Bogdan Nicolae, Justin Wozniak, George Bosilca. Understanding Scalability and Fine-Grain Parallelism of Synchronous Data Parallel Training. 2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC), Nov 2019, Denver, United States. pp.1-8, ⟨10.1109/MLHPC49564.2019.00006⟩. ⟨hal-02570148⟩