Using PCI Pass-Through for GPU Virtualization with CUDA - Archive ouverte HAL
Conference Papers Year : 2012

Using PCI Pass-Through for GPU Virtualization with CUDA

Chao-Tung Yang, Hsien-Yi Wang, Yu-Tso Liu


Nowadays, NVIDIA’s CUDA is a general-purpose, scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores: scientists throughout industry and academia are already using CUDA to achieve dramatic speedups on production and research codes. GPU-based clusters are likely to play an important role in future cloud data centers, because some compute-intensive applications may require both CPUs and GPUs. The goal of this paper is to develop a VM execution mechanism that can run such applications inside VMs and allow them to leverage GPUs effectively, so that different VMs can share GPUs without interfering with one another.
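The three CUDA abstractions named in the abstract (the thread-block hierarchy, shared memory, and barrier synchronization) can be illustrated with a minimal reduction kernel. This sketch is not taken from the paper; it is a standard example, and the kernel name `blockSum` is our own:

```cuda
#include <cuda_runtime.h>

// Block-wise sum reduction: each thread block loads a tile of the input
// into shared memory, then halves the number of active threads each step,
// using barrier synchronization to keep the tile consistent between steps.
__global__ void blockSum(const float *in, float *out, int n) {
    extern __shared__ float tile[];         // per-block shared memory
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;  // global index via the thread hierarchy

    tile[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                        // barrier: tile fully loaded

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];
        __syncthreads();                    // barrier between reduction steps
    }
    if (tid == 0)
        out[blockIdx.x] = tile[0];          // one partial sum per block
}
```

A launch such as `blockSum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);` sizes the shared-memory tile dynamically; each block produces one partial sum, so the VM-sharing mechanism the paper proposes must ultimately multiplex launches like this one onto a physical GPU.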
Main file: 978-3-642-35606-3_53_Chapter.pdf (175.33 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01551356 , version 1 (30-06-2017)


Attribution - CC BY 4.0



Chao-Tung Yang, Hsien-Yi Wang, Yu-Tso Liu. Using PCI Pass-Through for GPU Virtualization with CUDA. 9th International Conference on Network and Parallel Computing (NPC), Sep 2012, Gwangju, South Korea. pp.445-452, ⟨10.1007/978-3-642-35606-3_53⟩. ⟨hal-01551356⟩