
Resiliency in numerical algorithm design for extreme scale simulations

Emmanuel Agullo 1, Mirco Altenbernd 2, Hartwig Anzt 3, Leonardo Bautista-Gomez 4, Tommaso Benacchio 5, Luca Bonaventura 5, Hans-Joachim Bungartz 6, Sanjay Chatterjee 7, Florina M Ciorba 8, Nathan DeBardeleben 9, Daniel Drzisga 6, Sebastian Eibl 10, Christian Engelmann 11, Wilfried N Gansterer 12, Luc Giraud 1, Dominik Göddeke 2, Marco Heisig 10, Fabienne Jézéquel 13, 14, Nils Kohl 10, Xiaoye Sherry Li 15, Romain Lion 16, 17, Miriam Mehl 2, Paul Mycek 18, Michael Obersteiner 6, Enrique S Quintana-Ortí 19, Francesco Rizzi 20, Ulrich Rüde 10, Martin Schulz 6, Fred Fung 21, Robert Speck 22, Linda Stals 21, Keita Teranishi 23, Samuel Thibault 16, 17, Dominik Thönnes 10, Andreas Wagner 6, Barbara Wohlmuth 6
Abstract: This work is based on the seminar titled “Resiliency in Numerical Algorithm Design for Extreme Scale Simulations” held March 1-6, 2020 at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds, achieved at the cost of enormous resource and energy requirements. A typical large-scale computation running for 48 hours on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10²³ floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements?
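The abstract's cost argument, and the warning about recovery time approaching the failure rate, can be made concrete with a back-of-the-envelope sketch. The checkpoint-interval estimate below uses the classic Young/Daly first-order formula, which is a standard result in the resilience literature rather than something stated in the abstract; the checkpoint-write time and MTBF values are purely illustrative assumptions.

```python
# Back-of-the-envelope numbers behind the abstract's cost argument,
# plus a Young/Daly checkpoint-interval estimate. The 20 MW and 48 h
# figures come from the abstract; C and M below are assumed values.
import math

power_mw = 20          # exascale system power draw (from the abstract)
runtime_h = 48         # job duration (from the abstract)

energy_kwh = power_mw * 1_000 * runtime_h   # MW -> kW, times hours
print(f"energy: {energy_kwh:,.0f} kWh")     # 960,000 kWh, i.e. ~1 million

# Young/Daly first-order optimum: checkpoint every sqrt(2 * C * M),
# where C is the checkpoint write time and M is the system MTBF.
C = 30 * 60            # assumed: 30 min to write a multi-petabyte checkpoint
M = 2 * 3600           # assumed: 2 h mean time between failures at scale
t_opt = math.sqrt(2 * C * M)
print(f"checkpoint interval: {t_opt / 3600:.2f} h")

# If mean recovery cost (restart plus recomputing ~ t_opt/2 of lost work)
# approaches M, the job makes little forward progress -- the scenario
# the abstract warns about.
```

With these assumed values the optimal interval is about 1.4 hours; pushing the MTBF below the recovery time drives useful progress toward zero.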
While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications, and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
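The application-level checkpoint/rollback avenue mentioned above can be sketched in a few lines. This is a minimal illustration, not the article's method: the state layout, the in-memory checkpoint, and the fault-injection logic are all hypothetical, standing in for real checkpoint storage and a runtime's fault notification interface.

```python
# Minimal sketch of application-level checkpoint/rollback on detected
# faults. All names and the fault-injection logic are illustrative.
import copy
import random

def run(steps=100, interval=10, fail_prob=0.0, seed=0):
    rng = random.Random(seed)
    state = {"step": 0, "value": 0.0}          # the application state
    checkpoint = copy.deepcopy(state)           # last committed checkpoint
    while state["step"] < steps:
        # Simulate a detected fault (e.g. a node-failure notification
        # delivered by the runtime system).
        if rng.random() < fail_prob:
            state = copy.deepcopy(checkpoint)   # roll back, lose recent work
            continue
        state["value"] += 1.0                   # one unit of "work"
        state["step"] += 1
        if state["step"] % interval == 0:
            checkpoint = copy.deepcopy(state)   # commit progress

    return state

final = run(steps=100, interval=10, fail_prob=0.05)
print(final["step"], final["value"])  # reaches step 100 despite faults
```

The checkpoint interval trades overhead against lost work on rollback, which is exactly the tension the abstract raises: at exascale memory footprints, committing a checkpoint is far from free, motivating the more advanced, application-aware techniques the article surveys.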
Contributor: Luc Giraud
Submitted on: Monday, September 20, 2021 - 9:10:32 AM
Last modification on: Tuesday, August 2, 2022 - 4:24:24 AM
Long-term archiving on: Tuesday, December 21, 2021 - 6:13:45 PM

Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, et al. Resiliency in numerical algorithm design for extreme scale simulations. International Journal of High Performance Computing Applications, SAGE Publications, 2021. ⟨10.1177/10943420211055188⟩. ⟨hal-03348787⟩
