Enlarged Krylov Subspace Conjugate Gradient Methods for Reducing Communication

Laura Grigori 1,2, Sophie Moufawad 1,2, Frédéric Nataf 1,2
1 ALPINES - Algorithms and parallel tools for integrated numerical simulations
2 LJLL - Laboratoire Jacques-Louis Lions, Inria Paris-Rocquencourt, Institut National des Sciences Mathématiques et de leurs Interactions
Abstract: In this paper we introduce a new approach for reducing communication in Krylov subspace methods. It consists of enlarging the Krylov subspace by a maximum of t vectors per iteration, based on the domain decomposition of the graph of A. The enlarged Krylov subspace is a superset of the Krylov subspace, so the solution of the system Ax = b can be sought in the enlarged Krylov subspace instead of the Krylov subspace. Moreover, we show that the resulting enlarged Krylov projection methods converge in fewer iterations and lead to parallelizable algorithms with less communication than classical Krylov methods. We focus on Conjugate Gradient (CG), a Krylov projection method for symmetric (Hermitian) positive definite matrices, and discuss two new versions of it. The first method, multiple search direction with orthogonalization CG (MSDO-CG), is an adaptation of MSD-CG in which the search directions are A-orthonormalized, yielding a projection method that is guaranteed to converge at least as fast as CG. The second method, long recurrence enlarged CG (LRE-CG), is similar to GMRES in that we build an orthonormal basis for the enlarged Krylov subspace rather than search directions, and then use the whole basis to update the solution and the residual. Both methods converge faster than CG in terms of iterations, but LRE-CG converges faster than MSDO-CG since it uses the whole basis rather than only t search directions to update the solution. Moreover, the more subdomains are introduced, i.e. the larger t is, the faster both methods converge with respect to CG in terms of iterations. For example, for t = 64 the MSDO-CG and LRE-CG methods converge in 75% up to 98% fewer iterations than CG on the different test matrices. However, increasing t also increases the memory requirements, so in practice t should be relatively small, depending on the available memory, the size of the matrix, and the number of iterations needed for convergence. We also present the parallel algorithms along with their expected performance based on estimated run times, and the preconditioned versions with their convergence behavior.
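The sketch below illustrates, in NumPy, the enlarged-subspace idea described in the abstract: the residual is split into t vectors supported on disjoint index blocks (contiguous blocks here, as a simplified stand-in for the domain decomposition of the graph of A), the resulting block of candidate directions is A-orthonormalized against all previously kept directions, and the whole block is used jointly to update the iterate, in the spirit of MSDO-CG. This is a minimal illustration under these assumptions, not the report's algorithm; the names split_residual, a_orthonormalize, and msdo_cg are hypothetical.

```python
import numpy as np

def split_residual(r, t):
    """Split r into t vectors on disjoint contiguous index blocks,
    a simplified stand-in for the domain decomposition of the graph of A."""
    n = r.shape[0]
    bounds = np.linspace(0, n, t + 1).astype(int)
    T = np.zeros((n, t))
    for i in range(t):
        T[bounds[i]:bounds[i + 1], i] = r[bounds[i]:bounds[i + 1]]
    return T

def a_orthonormalize(W, A, P_all):
    """A-orthonormalize the columns of W against the previously kept
    (A-orthonormal) direction blocks in P_all, then against each other,
    via modified Gram-Schmidt in the A-inner product."""
    for P in P_all:
        W = W - P @ (P.T @ (A @ W))
    Q = np.zeros_like(W)
    for j in range(W.shape[1]):
        w = W[:, j].copy()
        for i in range(j):
            w -= Q[:, i] * (Q[:, i] @ (A @ w))
        norm = np.sqrt(w @ (A @ w))
        if norm > 1e-14:          # leave nearly dependent columns at zero
            Q[:, j] = w / norm
    return Q

def msdo_cg(A, b, t, tol=1e-8, maxiter=200):
    """Illustrative MSDO-CG-style iteration: each step derives t new
    search directions from the split residual, A-orthonormalizes them,
    and uses all t of them jointly to update the solution."""
    x = np.zeros_like(b)
    r = b.copy()
    P_all = []
    bnorm = np.linalg.norm(b)
    while np.linalg.norm(r) > tol * bnorm and len(P_all) < maxiter:
        P = a_orthonormalize(split_residual(r, t), A, P_all)
        alpha = P.T @ r           # optimal in the A-norm since P^T A P = I
        x += P @ alpha
        r -= A @ (P @ alpha)
        P_all.append(P)
    return x

# Example: SPD tridiagonal system with t = 4 subdomains.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = msdo_cg(A, b, t=4)
print(np.linalg.norm(A @ x - b))  # small residual, near tol * ||b||
```

Storing every block of directions in P_all mirrors the memory trade-off noted in the abstract: each iteration adds t vectors to keep and to orthogonalize against, which is why in practice t should stay relatively small.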
Document type:
Report
[Research Report] RR-8597, INRIA. 2014

https://hal.inria.fr/hal-01065985
Contributor: Sophie Moufawad
Submitted on: Thursday, September 18, 2014 - 21:02:16
Last modified on: Tuesday, April 17, 2018 - 11:24:59
Archived on: Friday, April 14, 2017 - 16:21:31

File

RR-8597.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01065985, version 1

Citation

Laura Grigori, Sophie Moufawad, Frédéric Nataf. Enlarged Krylov Subspace Conjugate Gradient Methods for Reducing Communication. [Research Report] RR-8597, INRIA. 2014. 〈hal-01065985〉

Metrics

Record views: 833
File downloads: 690