
Distributed-memory multi-GPU block-sparse tensor contraction for electronic structure

Abstract: Many domains of scientific simulation (chemistry, condensed matter physics, data science) increasingly eschew dense tensors for block-sparse tensors, sometimes with additional structure (recursive hierarchy, rank sparsity, etc.). Distributed-memory parallel computation with block-sparse tensorial data is paramount to minimize the time-to-solution (e.g., to study dynamical problems or for real-time analysis) and to accommodate problems of realistic size that are too large to fit into the host/device memory of a single node equipped with accelerators. Unfortunately, computation with such irregular data structures is a poor match to the dominant imperative, bulk-synchronous parallel programming model. In this paper, we focus on the critical element of block-sparse tensor algebra, namely binary tensor contraction, and report on an efficient and scalable implementation using the task-focused PaRSEC runtime. High performance of the block-sparse tensor contraction on the Summit supercomputer is demonstrated for synthetic data as well as for real data involved in electronic structure simulations of unprecedented size.
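
To make the core operation concrete, below is a minimal sketch (Python/NumPy, for illustration only) of a block-sparse contraction: nonzero blocks are kept in a dictionary keyed by block indices, and only block pairs with matching inner indices are multiplied. The function name and storage layout are hypothetical and do not reflect the report's PaRSEC-based, multi-GPU implementation.

import numpy as np

def block_sparse_gemm(A_blocks, B_blocks):
    """Contract two block-sparse matrices stored as {(i, k): dense block}.

    Only block pairs whose inner block indices match contribute, so the work
    scales with the number of nonzero block products rather than with the
    full (dense) matrix shape.
    """
    C_blocks = {}
    for (i, k), a in A_blocks.items():
        for (k2, j), b in B_blocks.items():
            if k != k2:
                continue  # inner block indices must match to contribute
            contrib = a @ b  # dense GEMM on a single pair of blocks
            if (i, j) in C_blocks:
                C_blocks[(i, j)] += contrib
            else:
                C_blocks[(i, j)] = contrib
    return C_blocks

# Example: two 2x2-block matrices, each with two zero blocks omitted.
A = {(0, 0): np.eye(2), (1, 1): 2.0 * np.eye(2)}
B = {(0, 1): np.ones((2, 2)), (1, 0): np.eye(2)}
C = block_sparse_gemm(A, B)
print(sorted(C.keys()))  # [(0, 1), (1, 0)]

In a distributed-memory setting such as the one studied in the report, each block product becomes a task whose placement and scheduling are delegated to the runtime; the sketch above only shows the sequential block-wise structure of the contraction.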
Document type: Research report

Cited literature: 49 references

Contributor: Equipe Roma
Submitted on: Wednesday, June 17, 2020 - 8:57:50 PM
Last modification on: Monday, May 16, 2022 - 4:46:02 PM


Files produced by the author(s)


  • HAL Id: hal-02872813, version 1



Thomas Herault, Yves Robert, George Bosilca, Robert Harrison, Cannada Lewis, et al. Distributed-memory multi-GPU block-sparse tensor contraction for electronic structure. [Research Report] RR-9353, Inria - Research Centre Grenoble – Rhône-Alpes. 2020. ⟨hal-02872813⟩


