Fast Reinforcement Learning with Large Action Sets Using Error-Correcting Output Codes for MDP Factorization

Gabriel Dulac-Arnold 1 Ludovic Denoyer 1 Philippe Preux 2 Patrick Gallinari 1
1 MALIRE - Machine Learning and Information Retrieval
LIP6 - Laboratoire d'Informatique de Paris 6
2 SEQUEL - Sequential Learning
LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal, LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe
Abstract: The use of Reinforcement Learning in real-world scenarios is strongly limited by issues of scale. Most RL learning algorithms are unable to deal with problems composed of hundreds or sometimes even dozens of possible actions, and therefore cannot be applied to many real-world problems. We consider the RL problem in the supervised classification framework, where the optimal policy is obtained through a multiclass classifier, the set of classes being the set of actions of the problem. We introduce error-correcting output codes (ECOCs) in this setting and propose two new methods for reducing complexity when using rollout-based approaches. The first method consists in using an ECOC-based classifier as the multiclass classifier, reducing the learning complexity from O(A²) to O(A log(A)). We then propose a novel method that profits from the ECOC's coding dictionary to split the initial MDP into O(log(A)) separate two-action MDPs. This second method reduces learning complexity even further, from O(A²) to O(log(A)), thus rendering problems with large action sets tractable. We finish by experimentally demonstrating the advantages of our approach on a set of benchmark problems, both in speed and performance.
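The ECOC idea summarized in the abstract — representing each of the A actions by a short binary codeword and recovering the chosen action by nearest-codeword decoding, so that a mistake by one of the O(log(A)) bit-level classifiers can still be corrected — can be sketched as follows. This is a minimal illustration with a hypothetical 4-action coding matrix of pairwise Hamming distance ≥ 3, not the paper's actual coding dictionary or learning procedure:

```python
# Hypothetical ECOC dictionary for a 4-action problem: each action is
# assigned a 5-bit codeword, and any two codewords differ in at least
# 3 positions, so a single erroneous bit prediction is still decoded
# to the intended action.
CODES = [
    [0, 0, 0, 0, 0],  # action 0
    [0, 1, 1, 1, 1],  # action 1
    [1, 0, 1, 1, 0],  # action 2
    [1, 1, 0, 0, 1],  # action 3
]

def decode(bits):
    """Map a vector of predicted bits back to an action index by
    choosing the codeword with minimal Hamming distance."""
    dists = [sum(b != c for b, c in zip(bits, code)) for code in CODES]
    return dists.index(min(dists))
```

For example, `decode([1, 0, 1, 1, 0])` returns action 2, and so does `decode([1, 0, 0, 1, 0])`, where one bit classifier has erred: the corrupted word is still closer to action 2's codeword than to any other. In the paper's setting, each bit position would correspond to one binary classifier (or one two-action MDP), which is the source of the O(A log(A)) and O(log(A)) complexity reductions.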
Document type:
Conference paper
European Conference on Machine Learning, Sep 2012, Bristol, United Kingdom. Springer, Machine Learning and Knowledge Discovery in Databases, 7524, pp.180-194, 2012, Lecture Notes in Computer Science. <http://link.springer.com/chapter/10.1007/978-3-642-33486-3_12>. <10.1007/978-3-642-33486-3_12>


https://hal.inria.fr/hal-00747729
Contributor: Philippe Preux
Submitted on: Thursday, November 8, 2012 - 15:21:30
Last modified on: Saturday, December 5, 2015 - 02:03:10
Long-term archiving on: Saturday, February 9, 2013 - 03:41:39

File

version.officielle.Springer.pd...
Publisher files allowed on an open archive

Citation

Gabriel Dulac-Arnold, Ludovic Denoyer, Philippe Preux, Patrick Gallinari. Fast Reinforcement Learning with Large Action Sets Using Error-Correcting Output Codes for MDP Factorization. European Conference on Machine Learning, Sep 2012, Bristol, United Kingdom. Springer, Machine Learning and Knowledge Discovery in Databases, 7524, pp.180-194, 2012, Lecture Notes in Computer Science. <http://link.springer.com/chapter/10.1007/978-3-642-33486-3_12>. <10.1007/978-3-642-33486-3_12>. <hal-00747729>

Metrics

Record views: 246
Document downloads: 91