Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU

Abstract: As a superior data analysis method, Machine Learning has suffered for many years from the bottleneck of limited computing capability. With the advent of numerous parallel computing hardware platforms, the modern GPU is becoming a promising carrier for Machine Learning tasks. In this paper, we propose an efficient GPU execution framework to speed up the forward propagation process of convolutional neural networks. By extending the convolution unrolling method to fit this batch mode, we obtain a significant increase in throughput with very little overhead.
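
The abstract's central idea, unrolling convolution into one large matrix multiplication (often called im2col) and applying it to a whole batch at once, can be sketched in plain NumPy. This is an illustrative sketch only, not the authors' GPU framework; the function names, the shapes, and the stride-1, no-padding assumption are ours, and in the paper's setting the final matrix multiply would run as a GPU GEMM.

```python
import numpy as np

def im2col_batch(x, kh, kw):
    """Unroll a batch of images into patch rows so that convolution
    becomes one large matrix multiplication (illustrative sketch only).

    x : (N, C, H, W) input batch
    returns : (N * out_h * out_w, C * kh * kw) matrix of patches
    """
    n, c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((n * out_h * out_w, c * kh * kw), dtype=x.dtype)
    row = 0
    for img in range(n):                       # whole batch unrolled together
        for i in range(out_h):
            for j in range(out_w):
                patch = x[img, :, i:i + kh, j:j + kw]
                cols[row] = patch.reshape(-1)
                row += 1
    return cols

def conv_forward_batch(x, weights):
    """Forward convolution for the whole batch via a single matrix multiply.

    weights : (F, C, kh, kw) filter bank
    returns : (N, F, out_h, out_w) feature maps
    """
    n, c, h, w = x.shape
    f, _, kh, kw = weights.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = im2col_batch(x, kh, kw)             # (N*out_h*out_w, C*kh*kw)
    w_mat = weights.reshape(f, -1)             # (F, C*kh*kw)
    out = cols @ w_mat.T                       # one large GEMM for the batch
    return out.reshape(n, out_h, out_w, f).transpose(0, 3, 1, 2)

# Tiny usage example with hypothetical sizes
x = np.random.rand(4, 3, 8, 8).astype(np.float32)   # batch of 4 RGB 8x8 images
w = np.random.rand(6, 3, 3, 3).astype(np.float32)   # 6 filters of size 3x3
y = conv_forward_batch(x, w)
print(y.shape)                                       # (4, 6, 6, 6)
```

Because every image in the batch contributes rows to a single unrolled matrix, one matrix multiplication covers the forward convolution for the entire batch, which is where the claimed throughput gain comes from.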
Document type: Conference papers

https://hal.inria.fr/hal-01403124
Contributor: Hal Ifip
Submitted on: Friday, November 25, 2016 - 2:39:47 PM
Last modification on: Thursday, March 5, 2020 - 5:40:15 PM
Long-term archiving on: Tuesday, March 21, 2017 - 1:52:33 AM

File

978-3-662-44917-2_43_Chapter.p...
Files produced by the author(s)

Licence


Distributed under a Creative Commons Attribution 4.0 International License

Identifiers

HAL Id: hal-01403124
DOI: 10.1007/978-3-662-44917-2_43

Citation

Yuan Gao, Rui Wang, Ning An, Yanjiang Wei, Depei Qian. Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU. 11th IFIP International Conference on Network and Parallel Computing (NPC), Sep 2014, Ilan, Taiwan. pp.522-525, ⟨10.1007/978-3-662-44917-2_43⟩. ⟨hal-01403124⟩

Metrics

Record views: 713
File downloads: 212