Abstract: Reproducing experimental results is nowadays seen as one of the greatest impediments to the progress of science in general and of distributed systems in particular. This stems from the increasing complexity of the systems under study and from the inherent difficulty of capturing and controlling all the variables that can potentially affect experimental results. We argue that this can only be addressed with a systematic approach to all stages and aspects of the evaluation process, covering, among others, the environment in which the experiment runs, the configuration and software versions used, and the network characteristics. In this tutorial paper, we focus on the networking aspect and discuss our ongoing research efforts and tools to contribute to a more systematic and reproducible evaluation of large-scale distributed systems.
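As an illustration of why network characteristics must be captured and controlled for reproducibility, the Linux traffic-control subsystem (`tc` with the `netem` qdisc) can impose a fixed latency, jitter, loss, and bandwidth profile on an experiment's network interface. This is a generic sketch, not the specific mechanism of the tools discussed in the paper; the interface name `eth0` and the parameter values are placeholders:

```shell
# Emulate a controlled WAN link on interface eth0 (requires root).
# delay: 100 ms mean latency with 10 ms jitter; loss: 0.5% packet loss;
# rate: cap bandwidth at 10 Mbit/s.
tc qdisc add dev eth0 root netem delay 100ms 10ms loss 0.5% rate 10mbit

# Inspect the active qdisc so it can be recorded alongside the results.
tc qdisc show dev eth0

# Remove the emulated conditions after the experiment.
tc qdisc del dev eth0 root
```

Recording these parameters together with the rest of the experiment's configuration allows others to recreate the same network conditions on different hardware.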
Miguel Matos. Kollaps/Thunderstorm: Reproducible Evaluation of Distributed Systems. 20th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS), Jun 2020, Valletta, Malta. pp.121-128, ⟨10.1007/978-3-030-50323-9_8⟩. ⟨hal-03223255⟩