Conference paper. Year: 2018

Co-scheduling HPC workloads on cache-partitioned CMP platforms

Abstract

Co-scheduling techniques are used to improve the throughput of applications on chip multiprocessors (CMP), but sharing resources often generates critical interferences. We focus on the interferences in the last level of cache (LLC) and use the Cache Allocation Technology (CAT) recently provided by Intel to partition the LLC and give each co-scheduled application its own cache area. We consider m iterative HPC applications running concurrently and answer the following questions: (i) how to precisely model the behavior of these applications on the cache-partitioned platform? and (ii) how many cores and cache fractions should be assigned to each application to maximize the platform efficiency? Here, platform efficiency is defined as maximizing the performance either globally, or as guaranteeing a fixed ratio of iterations per second for each application. Through extensive experiments using CAT, we demonstrate the impact of cache partitioning when multiple HPC applications are co-scheduled onto CMP platforms.
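For readers unfamiliar with CAT, cache partitioning of the kind described in the abstract is typically driven on Linux through the resctrl filesystem. The sketch below is a minimal illustration of that mechanism, not the authors' experimental harness: it assumes /sys/fs/resctrl is already mounted, root privileges, and a 20-way LLC; the group names, PIDs, and way masks are hypothetical.

```python
# Minimal sketch: give each co-scheduled application its own LLC partition
# via the Linux resctrl interface to Intel CAT.
# Assumes: mount -t resctrl resctrl /sys/fs/resctrl, and root privileges.
import os

RESCTRL = "/sys/fs/resctrl"

def create_partition(name, l3_mask, pids, cache_id=0):
    """Create a resctrl group, restrict it to the LLC ways in l3_mask,
    and move the given PIDs into it."""
    group = os.path.join(RESCTRL, name)
    os.makedirs(group, exist_ok=True)
    # The schemata file takes a capacity bitmask per cache domain,
    # e.g. "L3:0=fff" reserves the 12 lowest cache ways on socket 0.
    with open(os.path.join(group, "schemata"), "w") as f:
        f.write(f"L3:{cache_id}={l3_mask:x}\n")
    # The kernel expects one PID per write to the tasks file.
    for pid in pids:
        with open(os.path.join(group, "tasks"), "w") as f:
            f.write(f"{pid}\n")

# Illustrative split of a 20-way LLC between two applications
# (the PIDs are placeholders for the co-scheduled processes).
create_partition("app0", l3_mask=0x00fff, pids=[12345])  # 12 ways
create_partition("app1", l3_mask=0xff000, pids=[12346])  # 8 ways
```

Note that CAT requires each mask to be a contiguous run of ways, and that a complete co-schedule would also pin each application to its assigned cores (e.g. with taskset), which is precisely the core/cache allocation trade-off the paper studies.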
Main file: cluster18.pdf (425.62 KB). Origin: files produced by the author(s).

Dates and versions

hal-01874154, version 1 (14-09-2018)

Identifiers

  • HAL Id: hal-01874154, version 1

Cite

Guillaume Aupy, Anne Benoit, Brice Goglin, Loïc Pottier, Yves Robert. Co-scheduling HPC workloads on cache-partitioned CMP platforms. IEEE Cluster 2018, Sep 2018, Belfast, United Kingdom. pp.335-345. ⟨hal-01874154⟩
744 views
123 downloads
