Simulation-Based Parallel Training - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, 2022

Simulation-Based Parallel Training

Abstract

Numerical simulations are ubiquitous in science and engineering. Machine learning for science investigates how artificial neural architectures can learn from these simulations to speed up scientific discovery and engineering processes. Most of these architectures are trained in a supervised manner. They require tremendous amounts of data from simulations that are slow to generate and memory-hungry. In this article, we present our ongoing work to design a training framework that alleviates those bottlenecks. It generates data in parallel with the training process. Such simultaneity induces a bias in the data available during training. We present a strategy to mitigate this bias with a memory buffer. We test our framework on the multi-parametric Lorenz attractor. We show the benefit of our framework compared to offline training, and the success of our data-bias mitigation strategy in capturing the complex chaotic dynamics of the system.
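The idea sketched in the abstract can be illustrated with a minimal, self-contained example. This is not the authors' implementation: all names and parameters below are assumptions for illustration. A Lorenz simulator produces samples one step at a time; each sample is pushed into a bounded FIFO memory buffer, and training batches are drawn uniformly from that buffer, mixing old and new samples so that the learner does not see only the most recently simulated region of state space.

```python
import random
from collections import deque

def lorenz_step(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01):
    # One explicit-Euler step of the Lorenz system (illustrative only;
    # the paper's simulator and integration scheme may differ).
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def generate_and_train(n_steps=1000, buffer_size=256, batch_size=32, seed=0):
    # Sketch of simultaneous data generation and training: every new
    # simulation step enters a bounded memory buffer (FIFO eviction),
    # and batches are sampled uniformly from the buffer, which breaks
    # the temporal ordering of the stream and mitigates its bias.
    rng = random.Random(seed)
    buffer = deque(maxlen=buffer_size)  # the memory buffer
    state = (1.0, 1.0, 1.0)
    batches = 0
    for _ in range(n_steps):
        next_state = lorenz_step(state)
        buffer.append((state, next_state))  # (input, target) pair
        state = next_state
        if len(buffer) >= batch_size:
            batch = rng.sample(list(buffer), batch_size)  # uniform draw
            # here a neural network would take one gradient step on `batch`
            batches += 1
    return batches, len(buffer)
```

In a real setting the simulator would run in separate parallel processes feeding the buffer while the trainer consumes it; the single loop above only shows the buffering and uniform-sampling logic.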
Main file: main.pdf (790.13 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03842106 , version 1 (07-11-2022)
hal-03842106 , version 2 (28-11-2022)

Identifiers

Cite

Lucas Meyer, Alejandro Ribés, Bruno Raffin. Simulation-Based Parallel Training. NeurIPS 2022 - AI for Science Workshop, Nov 2022, New Orleans, United States. ⟨hal-03842106v2⟩
74 Views
58 Downloads
