Conference Paper, Year: 2009

In-Network Caching for Chip Multiprocessors

Abstract

Effective management of data is critical to the performance of emerging multi-core architectures. Our analysis of applications from SpecOMP reveals that a small fraction of shared addresses corresponds to a large portion of accesses. Utilizing this observation, we propose a technique that augments a router in an on-chip network with a small data store to reduce the memory access latency of the shared data. In the proposed technique, shared data from read response packets that pass through the router are cached in its data store to reduce the number of hops required to service future read requests. Our limit study reveals that such caching has the potential to reduce memory access latency by 27% on average. Further, two practical caching strategies are shown to reduce memory access latency by 14% and 17% respectively, with a data store of just four entries at 2.5% area overhead.
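To make the mechanism concrete, the following is a minimal sketch of the per-router data store described in the abstract: a tiny fully-associative cache, keyed by block address, that is filled from read-response packets passing through the router and probed by later read requests. The class and field names, the 4-entry capacity default, and the LRU replacement policy are illustrative assumptions for this sketch, not details confirmed by the paper.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>
#include <optional>

// Hypothetical sketch of a router-attached data store: a small,
// fully-associative cache indexed by block address with LRU replacement.
class RouterDataStore {
public:
    explicit RouterDataStore(std::size_t capacity = 4) : capacity_(capacity) {}

    // On a read-request packet: if the shared block is cached at this router,
    // the request can be serviced here, saving the remaining hops.
    std::optional<uint64_t> lookup(uint64_t block_addr) {
        for (auto it = entries_.begin(); it != entries_.end(); ++it) {
            if (it->addr == block_addr) {
                Entry hit = *it;           // refresh LRU position
                entries_.erase(it);
                entries_.push_front(hit);
                return hit.data;
            }
        }
        return std::nullopt;               // miss: forward the request as usual
    }

    // On a read-response packet passing through: cache the shared data so
    // future read requests need fewer hops.
    void insert(uint64_t block_addr, uint64_t data) {
        for (auto it = entries_.begin(); it != entries_.end(); ++it) {
            if (it->addr == block_addr) { entries_.erase(it); break; }
        }
        if (entries_.size() == capacity_) entries_.pop_back();  // evict LRU entry
        entries_.push_front({block_addr, data});
    }

private:
    struct Entry { uint64_t addr; uint64_t data; };
    std::size_t capacity_;
    std::deque<Entry> entries_;   // front = most recently used
};

int main() {
    RouterDataStore store;                 // 4-entry store at one router
    store.insert(0x1000, 42);              // a read response passes through
    if (auto d = store.lookup(0x1000))     // a later read request hits locally
        std::cout << "served in-network: " << *d << "\n";
}
```

In this sketch a hit on `lookup` stands in for the router replying to the requester directly; in the paper's setting the caching strategy decides which passing responses are worth keeping in so few entries.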

Dates and versions

inria-00446357, version 1 (12-01-2010)

Identifiers

Cite

Aditya Yanamandra, Mary Jane Irwin, Vijaykrishnan Narayanan, Mahmut Kandemir, Sri Hari Krishna Narayanan. In-Network Caching for Chip Multiprocessors. HiPEAC 2009 - High Performance and Embedded Architectures and Compilers, Jan 2009, Paphos, Cyprus. ⟨10.1007/978-3-540-92990-1_27⟩. ⟨inria-00446357⟩

Collections

HIPEAC09