DLIR: An Intermediate Representation for Deep Learning Processors

Abstract : Deep learning processors (DLPs), especially ASIC-based accelerators, have proven to be promising devices for accelerating the computation of deep learning algorithms. However, the cost of learning to program these DLPs is high, as each exposes a different programming interface. On the other hand, many deep learning frameworks have been proposed to ease the burden of developing deep learning algorithms, but few of them support DLPs. Due to the special architectural features of DLPs, it is hard to integrate a DLP into existing frameworks. In this paper, we propose an intermediate representation (called DLIR) to bridge the gap between deep learning frameworks and DLPs. DLIR is a tensor-based language with built-in tensor intrinsics that map directly to hardware primitives. We show that DLIR improves development efficiency and is able to generate efficient code.
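To illustrate the idea described in the abstract, the following is a minimal, purely hypothetical sketch of what a tensor-based IR with built-in intrinsics might look like; the intrinsic names, hardware primitive names, and lowering scheme are invented for illustration and are not DLIR's actual syntax, which is defined in the paper itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of a tensor-based IR: each operation is a built-in
# tensor intrinsic that lowers one-to-one onto a hardware primitive.

@dataclass
class TensorOp:
    intrinsic: str   # built-in tensor intrinsic, e.g. "matmul"
    inputs: list     # operand names
    output: str      # result name

# Assumed mapping from IR intrinsics to accelerator primitives
# (primitive names are invented for this example).
HW_PRIMITIVES = {
    "matmul": "MMU_GEMM",   # matrix unit
    "relu":   "VPU_RELU",   # vector unit
    "pool":   "VPU_POOL",
}

def lower(program):
    """Lower a list of TensorOps to hardware-primitive calls.

    Because every intrinsic corresponds directly to a primitive,
    lowering is a simple table lookup rather than pattern matching
    over generic loop nests.
    """
    code = []
    for op in program:
        prim = HW_PRIMITIVES[op.intrinsic]
        code.append(f"{prim}({', '.join(op.inputs)}) -> {op.output}")
    return code

# A small fully connected layer expressed in the toy IR:
layer = [
    TensorOp("matmul", ["x", "w"], "t0"),
    TensorOp("relu",   ["t0"],     "y"),
]
print(lower(layer))
```

The direct intrinsic-to-primitive mapping is what would let a framework target the accelerator without reimplementing each device's low-level programming interface.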
Document type :
Conference papers

Contributor : Hal Ifip
Submitted on : Thursday, September 5, 2019 - 1:31:15 PM
Last modification on : Thursday, September 5, 2019 - 1:35:33 PM
Long-term archiving on : Thursday, February 6, 2020 - 7:22:04 AM


Files produced by the author(s)


Distributed under a Creative Commons Attribution 4.0 International License



Huiying Lan, Zidong Du. DLIR: An Intermediate Representation for Deep Learning Processors. 15th IFIP International Conference on Network and Parallel Computing (NPC), Nov 2018, Muroran, Japan. pp.169-173, ⟨10.1007/978-3-030-05677-3_19⟩. ⟨hal-02279553⟩
