In-Network Caching for Chip Multiprocessors

Abstract: Effective management of data is critical to the performance of emerging multi-core architectures. Our analysis of applications from SpecOMP reveals that a small fraction of shared addresses accounts for a large portion of accesses. Exploiting this observation, we propose a technique that augments a router in an on-chip network with a small data store to reduce the memory access latency of shared data. In the proposed technique, shared data from read-response packets that pass through the router are cached in its data store, reducing the number of hops required to service future read requests. Our limit study reveals that such caching has the potential to reduce memory access latency by 27% on average. Further, two practical caching strategies are shown to reduce memory access latency by 14% and 17%, respectively, with a data store of just four entries at a 2.5% area overhead.
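A minimal sketch of the mechanism the abstract describes: a tiny per-router data store that snoops read-response packets and answers later read requests locally. The class name, the LRU replacement policy, and the interface are illustrative assumptions; only the four-entry capacity comes from the abstract.

```python
from collections import OrderedDict

class RouterDataStore:
    """Hypothetical model of the paper's in-router cache: a small
    store filled from read-response packets passing through the
    router. LRU replacement is an assumption, not from the paper."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()  # address -> data

    def observe_read_response(self, addr, data):
        # Cache data carried by a read-response packet traversing this
        # router; evict the least recently used entry when full.
        if addr in self.store:
            self.store.move_to_end(addr)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)
        self.store[addr] = data

    def service_read_request(self, addr):
        # A hit lets the router reply directly, saving the remaining
        # hops to the home node; a miss forwards the request onward.
        if addr in self.store:
            self.store.move_to_end(addr)
            return self.store[addr]
        return None
```

With only four entries, the store captures exactly the behavior the SpecOMP analysis motivates: a few heavily shared addresses stay resident, while the long tail of cold addresses is evicted quickly.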
Document type: Conference paper

https://hal.inria.fr/inria-00446357
Contributor: Ist Rennes <>
Submitted on: Tuesday, January 12, 2010 - 15:34:15
Last modified on: Monday, October 2, 2017 - 13:52:03

Citation

Aditya Yanamandra, Mary Jane Irwin, Vijaykrishnan Narayanan, Mahmut Kandemir, Sri Hari Krishna Narayanan. In-Network Caching for Chip Multiprocessors. In: André Seznec, Joel Emer, Mike O'Boyle, Margaret Martonosi, Theo Ungerer (eds.), HiPEAC 2009 - High Performance and Embedded Architectures and Compilers, Jan 2009, Paphos, Cyprus. Springer, 2009. DOI: 10.1007/978-3-540-92990-1_27. HAL: inria-00446357.
