
Foreword to the Special Issue of the Twenty Sixth International Heterogeneity in Computing Workshop (HCW) and to the Fifteenth International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms (HeteroPar)

Jorge Barbosa 1, Emmanuel Jeannot 2
2 TADAAM - Topology-Aware System-Scale Data Management for High-Performance Computing, LaBRI - Laboratoire Bordelais de Recherche en Informatique, Inria Bordeaux - Sud-Ouest
Abstract: Heterogeneity is emerging as one of the most profound and challenging characteristics of today's parallel environments. Most modern computing systems are heterogeneous, from the macro level, where networks of distributed computers composed of diverse node architectures are interconnected by potentially heterogeneous networks, to the micro level, where deeper memory hierarchies and various accelerator architectures are increasingly common; as a result, the impact of heterogeneity on all computing tasks is increasing rapidly. Traditional parallel algorithms, programming environments, and tools, designed for legacy homogeneous multiprocessors, will at best achieve a small fraction of the efficiency and the potential performance that we should expect from parallel computing in tomorrow's highly diversified and mixed environments. New ideas, innovative algorithms, and specialized programming environments and tools are needed to use these new and multifarious parallel architectures efficiently. The purpose of this special issue is to collect extended versions of contributions submitted to the Twenty-Sixth International Heterogeneity in Computing Workshop (HCW) and to the Fifteenth International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms (HeteroPar). After a thorough peer-reviewing process, three papers were selected for publication. The topics addressed in this issue are the performance of cloud function providers, a cross-architecture Kalman filter to accelerate the reconstruction of particle collisions, and a methodology for modeling applications, together with a fast greedy algorithm, to obtain realistic mapping and scheduling solutions on heterogeneous systems.
Cloud functions are becoming an increasingly popular method of running distributed applications, as they allow developers to deploy their code in the form of a function to the cloud, which is then responsible for automatic resource provisioning and scaling. Kamil Figiela, Adam Gajek, Adam Zima, Beata Obrok and Maciej Malawski [Figiela, 2018] present a performance evaluation study of heterogeneous cloud functions in which the major cloud function providers are evaluated, namely AWS Lambda, Azure Functions, Google Cloud Functions, and IBM Cloud Functions. At the LHCb detector in the Large Hadron Collider, the reconstruction of particle collisions in high-energy physics detectors happens at an average rate of 30 million times per second, with the Kalman filter being a fundamental element of this process. Due to iterative enhancements in the detector's technology, together with the projected removal of the hardware filter, the rate of particles that will need to

https://hal.inria.fr/hal-01903118
Contributor: Emmanuel Jeannot
Submitted on: Wednesday, October 24, 2018 - 10:14:08 AM
Last modification on: Thursday, May 16, 2019 - 6:46:02 PM
Document(s) archived on: Friday, January 25, 2019 - 1:22:13 PM

Citation

Jorge Barbosa, Emmanuel Jeannot. Foreword to the Special Issue of the Twenty Sixth International Heterogeneity in Computing Workshop (HCW) and to the Fifteenth International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms (HeteroPar). Concurrency and Computation: Practice and Experience, Wiley, 2018, pp.2. ⟨10.1002/cpe.5007⟩. ⟨hal-01903118⟩
