Conference papers

Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU

Abstract : As a superior data analysis method, Machine Learning has suffered for many years from the bottleneck of limited computing capability. With the advent of numerous parallel computing hardware platforms, the modern GPU is becoming a promising carrier for Machine Learning tasks. In this paper, we propose an efficient GPU execution framework to speed up the forward propagation process of convolutional neural networks. By extending the convolution unrolling method to fit this batch mode, we achieve a significant increase in throughput with very little overhead.
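The abstract refers to the convolution unrolling method (commonly known as im2col), which turns convolution into one large matrix multiplication so that a GPU's GEMM throughput can be exploited across a whole batch. The paper's actual implementation is not included here; the following NumPy sketch only illustrates the general unrolling idea, and all function names, shapes, and the stride-1/no-padding setup are assumptions for illustration:

```python
import numpy as np

def im2col(batch, kh, kw):
    # batch: (N, C, H, W). Unroll every kh x kw patch into a column so
    # that convolution over the whole batch becomes matrix multiplication.
    N, C, H, W = batch.shape
    out_h, out_w = H - kh + 1, W - kw + 1          # stride 1, no padding
    cols = np.empty((N, C * kh * kw, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = batch[:, :, i:i + kh, j:j + kw]   # (N, C, kh, kw)
            cols[:, :, i * out_w + j] = patch.reshape(N, -1)
    return cols

def conv_forward(batch, weights):
    # weights: (F, C, kh, kw). One broadcast GEMM covers the whole batch,
    # which is where the throughput gain of batch-mode unrolling comes from.
    F, C, kh, kw = weights.shape
    N, _, H, W = batch.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = im2col(batch, kh, kw)          # (N, C*kh*kw, out_h*out_w)
    w_mat = weights.reshape(F, -1)        # (F, C*kh*kw)
    out = w_mat @ cols                    # broadcasts to (N, F, out_h*out_w)
    return out.reshape(N, F, out_h, out_w)
```

On a GPU the single large matrix product maps to one cuBLAS-style GEMM call, amortizing kernel-launch and memory-transfer overhead over the batch; the extra memory consumed by the unrolled `cols` buffer is the usual trade-off of this technique.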
Submitted on: Friday, November 25, 2016 - 2:39:47 PM
Last modification on: Wednesday, October 13, 2021 - 7:16:03 PM
Long-term archiving on: Tuesday, March 21, 2017 - 1:52:33 AM




Distributed under a Creative Commons Attribution 4.0 International License



Yuan Gao, Rui Wang, Ning An, Yanjiang Wei, Depei Qian. Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU. 11th IFIP International Conference on Network and Parallel Computing (NPC), Sep 2014, Ilan, Taiwan. pp.522-525, ⟨10.1007/978-3-662-44917-2_43⟩. ⟨hal-01403124⟩


