GPU Code Optimization using Abstract Kernel Emulation and Sensitivity Analysis

Abstract: In this paper, we develop an approach to GPU kernel optimization by focusing on identification of bottleneck resources and determining optimization parameters that can alleviate the bottleneck. Performance modeling for GPUs is done by abstract kernel emulation along with latency/gap modeling of resources. Sensitivity analysis with respect to resource latency/gap parameters is used to predict the bottleneck resource for a given kernel's execution. The utility of the bottleneck analysis is demonstrated in two contexts: 1) coupling the new bottleneck-driven optimization strategy with the OpenTuner auto-tuner, where experimental results on all kernels from the Rodinia suite and on GPU tensor contraction kernels from the NWChem computational chemistry suite demonstrate its effectiveness; and 2) manual code optimization, where two case studies illustrate the use of the bottleneck analysis to iteratively improve the performance of code from state-of-the-art domain-specific code generators.
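The core idea of the sensitivity analysis can be illustrated with a small sketch. This is not the paper's implementation: the resource names, per-unit gap values, and the toy cost model below are invented for illustration. The sketch perturbs each resource's gap parameter slightly and reports the resource whose perturbation most affects the predicted execution time as the likely bottleneck.

```python
# Illustrative sketch (not the authors' code): predicting the bottleneck
# resource of a hypothetical GPU kernel via sensitivity analysis on
# latency/gap parameters. All names and numbers are assumptions.

def predicted_time(gaps, work):
    # Toy cost model: predicted time is dominated by the slowest
    # resource pipeline (work units issued * gap per unit, in cycles).
    return max(work[r] * gaps[r] for r in gaps)

def bottleneck(gaps, work, delta=0.01):
    # Perturb each resource's gap by a small relative delta and record
    # how much the predicted time changes; the most sensitive resource
    # is reported as the predicted bottleneck.
    base = predicted_time(gaps, work)
    sensitivity = {}
    for r in gaps:
        perturbed = dict(gaps)
        perturbed[r] *= (1 + delta)
        sensitivity[r] = (predicted_time(perturbed, work) - base) / base
    return max(sensitivity, key=sensitivity.get), sensitivity

# Hypothetical kernel profile: work units per resource and gap (cycles/unit)
work = {"global_mem": 4096, "shared_mem": 2048, "alu": 8192}
gaps = {"global_mem": 8.0, "shared_mem": 2.0, "alu": 1.0}

res, sens = bottleneck(gaps, work)
print(res)  # global_mem dominates here: 4096 * 8.0 exceeds the other terms
```

In the paper's setting the predicted time comes from abstract kernel emulation rather than a closed-form max, but the perturb-and-compare structure of the analysis is the same: an optimization is only worth applying if it relieves the resource the model is most sensitive to.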
Document type: Conference papers
Cited literature: 42 references
Contributor: Fabrice Rastello
Submitted on: Friday, December 14, 2018 - 1:28:31 PM
Last modification on: Tuesday, May 11, 2021 - 11:37:38 AM
Long-term archiving on: Friday, March 15, 2019 - 3:00:54 PM
Files produced by the author(s)
Changwan Hong, Aravind Sukumaran-Rajam, Jinsung Kim, Prashant Rawat, Sriram Krishnamoorthy, et al. GPU Code Optimization using Abstract Kernel Emulation and Sensitivity Analysis. PLDI 2018 - 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, Jun 2018, Philadelphia, United States. pp. 736-751, ⟨10.1145/3192366.3192397⟩. ⟨hal-01955475⟩