Conference papers

Towards Memory-Optimized Data Shuffling Patterns for Big Data Analytics

Abstract: Big data analytics is an indispensable tool in transforming science, engineering, medicine, healthcare, finance, and ultimately business itself. With exploding data sizes and the need for shorter time-to-solution, in-memory platforms such as Apache Spark are gaining popularity. However, they face important challenges, among which data shuffling is particularly difficult: on the one hand, shuffling is a key part of the computation with a major impact on overall performance and scalability, so its efficiency is paramount; on the other hand, it must operate with scarce memory in order to leave as much as possible available for data caching. In this context, scheduling data transfers so as to address both dimensions of the problem simultaneously is non-trivial, and state-of-the-art solutions often rely on simple approaches that yield sub-optimal performance and resource usage. This paper contributes a novel shuffle data transfer strategy that dynamically adapts to the computation while minimizing memory utilization, which we summarize as a series of design principles.
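To make the tension described above concrete, the following is a minimal, purely illustrative sketch (not the paper's actual algorithm) of a scheduler that admits remote shuffle-block fetches only while the total size of in-flight blocks stays under a fixed memory budget, deferring the rest until memory is released. The class name, budget parameter, and block sizes are all hypothetical.

```python
from collections import deque


class ShuffleFetchScheduler:
    """Toy memory-bounded shuffle scheduler: start a fetch only if the
    bytes already in flight plus the new block fit within the budget;
    otherwise queue the request until a completed fetch frees memory."""

    def __init__(self, memory_budget: int):
        self.budget = memory_budget
        self.in_flight = 0      # bytes currently being fetched
        self.pending = deque()  # (block_id, size) waiting for memory

    def request(self, block_id: str, size: int) -> bool:
        """Try to start fetching a shuffle block; defer if over budget."""
        if self.in_flight + size <= self.budget:
            self.in_flight += size
            return True   # fetch admitted immediately
        self.pending.append((block_id, size))
        return False      # deferred until memory frees up

    def complete(self, size: int) -> list:
        """A fetch finished: release its memory, then admit as many
        queued fetches as now fit; return the ids that were started."""
        self.in_flight -= size
        started = []
        while self.pending and self.in_flight + self.pending[0][1] <= self.budget:
            block_id, sz = self.pending.popleft()
            self.in_flight += sz
            started.append(block_id)
        return started


# Example: a 100-byte budget admits the first 60-byte block, defers the
# second, and admits it once the first completes.
sched = ShuffleFetchScheduler(memory_budget=100)
print(sched.request("b1", 60))  # True
print(sched.request("b2", 60))  # False (would exceed budget)
print(sched.complete(60))       # ["b2"]
```

The sketch only captures the memory-capping dimension; the paper's contribution is additionally adapting such scheduling decisions dynamically to the computation itself.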

Contributor: Bogdan Nicolae
Submitted on: Monday, August 22, 2016 - 4:21:55 PM
Last modification on: Monday, February 7, 2022 - 4:06:03 PM
Long-term archiving on: Wednesday, November 23, 2016 - 12:50:56 PM





Bogdan Nicolae, Carlos Costa, Claudia Misale, Kostas Katrinis, Yoonho Park. Towards Memory-Optimized Data Shuffling Patterns for Big Data Analytics. CCGrid’16: 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 2016, Cartagena, Colombia. pp.409-412, ⟨10.1109/CCGrid.2016.85⟩. ⟨hal-01355227⟩


