Conference Paper, Year: 2017

Optimization Space Pruning without Regrets

Abstract

Many computationally-intensive algorithms benefit from the wide parallelism offered by Graphics Processing Units (GPUs). However, the search for a close-to-optimal implementation remains extremely tedious due to the specialization and complexity of GPU architectures. We present a novel approach to automatically discover the best performing code from a given set of possible implementations. It involves a branch and bound algorithm with two distinctive features: (1) an analytic performance model of a lower bound on the execution time, and (2) the ability to estimate such bounds on a partially-specified implementation. The unique features of this performance model make it possible to aggressively prune the optimization space without eliminating the best performing implementation. While the space considered in this paper focuses on GPUs, the approach is generic enough to be applied to other architectures. We implemented our algorithm in a tool called Telamon and demonstrate its effectiveness on a huge, architecture-specific and input-sensitive optimization space. The information provided by the performance model also helps to identify ways to enrich the search space with better candidates, or to highlight architectural bottlenecks.
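The pruning argument relies on one property: the lower bound computed for a partially-specified candidate also bounds the execution time of every fully-specified implementation derived from it, so discarding a candidate whose bound exceeds the best measured time so far cannot eliminate the optimum. The following is a minimal, hypothetical sketch of such a branch-and-bound loop; the `lower_bound`, `children`, `is_complete`, and `measure` callables are placeholders for illustration and do not reflect Telamon's actual API.

```python
import heapq

def branch_and_bound(root, lower_bound, children, is_complete, measure):
    """Best-first branch and bound over partially-specified candidates.

    `lower_bound(c)` must never exceed the true execution time of any
    implementation reachable from candidate `c`, so pruning on it cannot
    discard the best performing implementation.
    """
    best_time, best_impl = float("inf"), None
    counter = 0                               # tie-breaker for the heap
    frontier = [(lower_bound(root), counter, root)]
    while frontier:
        bound, _, cand = heapq.heappop(frontier)
        if bound >= best_time:
            continue                          # no descendant can beat the incumbent
        if is_complete(cand):
            t = measure(cand)                 # e.g. compile and run on the GPU
            if t < best_time:
                best_time, best_impl = t, cand
        else:
            for child in children(cand):      # fix one more implementation choice
                b = lower_bound(child)
                if b < best_time:             # prune before even enqueuing
                    counter += 1
                    heapq.heappush(frontier, (b, counter, child))
    return best_impl, best_time
```

Exploring candidates in order of increasing lower bound (best-first) tends to tighten the incumbent early, which in turn lets the bound prune larger parts of the space.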
Main file: paper.pdf (875.54 KB)

Dates and versions

hal-01655602, version 1 (05-12-2017)


Cite

Ulysse Beaugnon, Antoine Pouille, Marc Pouzet, Jacques Pienaar, Albert Cohen. Optimization Space Pruning without Regrets. CC 2017 - 26th International Conference on Compiler Construction, Feb 2017, Austin, TX, United States. pp.34-44, ⟨10.1145/3033019.3033023⟩. ⟨hal-01655602⟩