Grid Differentiated Services: a Reinforcement Learning Approach
Conference Paper, 2008



Large-scale production grids are a major case for autonomic computing. Following Kephart's classical definition, an autonomic computing system should optimize its own behavior in accordance with high-level guidance from humans. The central tenet of this paper is that the combination of utility functions and reinforcement learning (RL) can provide a general and efficient method for dynamically allocating grid resources in order to optimize the satisfaction of both end-users and participating institutions. The flexibility of an RL-based system makes it possible to model the state of the grid, the jobs to be scheduled, and the high-level objectives of the various actors on the grid. RL-based scheduling can seamlessly adapt its decisions to changes in the distributions of inter-arrival time, QoS requirements, and resource availability. Moreover, it requires minimal prior knowledge about the target environment, including user requests and infrastructure. Our experimental results, on both a synthetic workload and a real trace, show that RL is not only a realistic alternative to empirical scheduler design, but is also able to outperform empirically designed schedulers.
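To make the utility-plus-RL idea concrete, here is a minimal toy sketch (not the paper's actual algorithm): a tabular epsilon-greedy learner that routes jobs of two hypothetical QoS classes to one of two resource pools, with an invented utility table standing in for user and institutional satisfaction. All class names, pool names, and utility values are assumptions for illustration only.

```python
import random

CLASSES = ["interactive", "batch"]   # hypothetical job QoS classes (states)
POOLS = ["fast", "slow"]             # hypothetical allocation actions

def utility(job_class, pool):
    """Invented utility: interactive jobs strongly prefer the fast pool
    (low latency); batch jobs are nearly indifferent, and routing them
    to the slow pool keeps the fast pool free."""
    table = {
        ("interactive", "fast"): 1.0,
        ("interactive", "slow"): 0.1,
        ("batch", "fast"): 0.4,
        ("batch", "slow"): 0.5,
    }
    return table[(job_class, pool)]

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in CLASSES for a in POOLS}
    for _ in range(episodes):
        s = rng.choice(CLASSES)                 # a job of some class arrives
        if rng.random() < eps:                  # epsilon-greedy exploration
            a = rng.choice(POOLS)
        else:
            a = max(POOLS, key=lambda p: q[(s, p)])
        r = utility(s, a)                       # immediate utility as reward
        q[(s, a)] += alpha * (r - q[(s, a)])    # one-step (bandit-style) update
    return q

q = train()
policy = {s: max(POOLS, key=lambda p: q[(s, p)]) for s in CLASSES}
print(policy)  # interactive jobs learn to prefer the fast pool
```

The full problem studied in the paper is far richer (grid state, inter-arrival distributions, changing resource availability), but the sketch shows the core loop: observe a job, pick an allocation, receive a utility-derived reward, and update the value estimates.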
Main file: RLccg08.pdf (352.53 KB), produced by the author(s).

Dates and versions

inria-00287826, version 1 (13-06-2008)


Cécile Germain Renaud, Julien Perez, Balázs Kégl, C. Loomis. Grid Differentiated Services: a Reinforcement Learning Approach. 8th IEEE International Symposium on Cluster Computing and the Grid, May 2008, Lyon, France. ⟨inria-00287826⟩
