GPU Code Optimization using Abstract Kernel Emulation and Sensitivity Analysis

Abstract: In this paper, we develop an approach to GPU kernel optimization that focuses on identifying the bottleneck resource and determining the optimization parameters that can alleviate it. Performance modeling for GPUs is done by abstract kernel emulation together with latency/gap modeling of resources. Sensitivity analysis with respect to the resource latency/gap parameters is used to predict the bottleneck resource for a given kernel's execution. The utility of the bottleneck analysis is demonstrated in two contexts: 1) coupling the new bottleneck-driven optimization strategy with the OpenTuner auto-tuner, with experimental results on all kernels from the Rodinia suite and GPU tensor contraction kernels from the NWChem computational chemistry suite demonstrating its effectiveness; and 2) manual code optimization, where two case studies illustrate the use of the bottleneck analysis to iteratively improve the performance of code from state-of-the-art domain-specific code generators.
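To make the bottleneck-identification idea concrete, the following is a minimal, hypothetical Python sketch of sensitivity analysis over per-resource latency/gap parameters: a toy throughput-limited timing model stands in for the paper's abstract kernel emulation, each resource's gap is perturbed slightly, and the resource whose perturbation causes the largest relative slowdown is reported as the bottleneck. The resource names, operation counts, toy model, and 5% perturbation are all illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of latency/gap sensitivity analysis (illustrative only;
# the resource names, toy timing model, and 5% perturbation are assumptions,
# not the paper's actual performance model).

# Per-resource gap (cycles between successive operations the resource can accept)
# and the number of operations the emulated kernel issues to each resource.
gaps = {"global_mem": 4.0, "shared_mem": 1.0, "fp_alu": 0.5, "sfu": 8.0}
ops  = {"global_mem": 2.0e6, "shared_mem": 6.0e6, "fp_alu": 1.2e7, "sfu": 1.0e5}

def predicted_time(gaps, ops):
    """Toy throughput-limited model: each resource needs ops * gap cycles,
    and the most saturated resource determines the predicted kernel time."""
    return max(ops[r] * gaps[r] for r in gaps)

def bottleneck(gaps, ops, delta=0.05):
    """Perturb each resource's gap by +delta and rank resources by how much
    the predicted time grows; the most sensitive resource is the bottleneck."""
    base = predicted_time(gaps, ops)
    sensitivity = {}
    for r in gaps:
        perturbed = dict(gaps)
        perturbed[r] *= (1.0 + delta)
        sensitivity[r] = (predicted_time(perturbed, ops) - base) / base
    return max(sensitivity, key=sensitivity.get), sensitivity

if __name__ == "__main__":
    resource, sens = bottleneck(gaps, ops)
    print("Predicted bottleneck resource:", resource)
    for r, s in sorted(sens.items(), key=lambda kv: -kv[1]):
        print(f"  {r:11s} relative slowdown under +5% gap: {s:.3f}")
```

In the paper itself, the predicted execution time comes from abstract kernel emulation of the GPU binary rather than a closed-form expression, but the perturb-and-rank structure of the sensitivity analysis is the same.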
Document type: Conference paper

Cited literature: 42 references

https://hal.inria.fr/hal-01955475
Contributor: Fabrice Rastello
Submitted on: Friday, December 14, 2018 - 1:28:31 PM
Last modification on: Thursday, February 7, 2019 - 3:38:40 PM
Long-term archiving on: Friday, March 15, 2019 - 3:00:54 PM

File: saake-hal.pdf (files produced by the author(s))

Citation

Changwan Hong, Aravind Sukumaran-Rajam, Jinsung Kim, Prashant Rawat, Sriram Krishnamoorthy, et al. GPU Code Optimization using Abstract Kernel Emulation and Sensitivity Analysis. PLDI 2018 - 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, Jun 2018, Philadelphia, United States. pp. 736-751, ⟨10.1145/3192366.3192397⟩. ⟨hal-01955475⟩
