Journal articles

On the Possibility of Learning in Reactive Environments with Arbitrary Dependence

Daniil Ryabko (1), Marcus Hutter (2)
(1) SEQUEL (Sequential Learning), LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract : We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions, i.e., environments more general than (PO)MDPs. The task for an agent is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We identify sufficient conditions on the class of environments under which there exists an agent that attains the best asymptotic reward for every environment in the class. We analyze how tight these conditions are and how they relate to probabilistic assumptions known in reinforcement learning and related fields, such as Markov decision processes and mixing conditions.
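The setting in the abstract (an unknown environment drawn from a known countable class) can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal consistency-elimination agent over a finite, deterministic class, where the paper instead treats stochastic environments with arbitrary dependence. The agent follows the policy that is optimal for the lowest-indexed hypothesis still consistent with all observations, and discards hypotheses that mispredict:

```python
# Toy illustration (assumed, not from the paper): each hypothetical
# environment is a deterministic mapping action -> reward, and the agent
# acts optimally for the first surviving hypothesis in a fixed enumeration.

class ConsistencyAgent:
    def __init__(self, envs):
        self.envs = list(envs)                    # known class (finite here)
        self.alive = set(range(len(self.envs)))   # indices not yet refuted

    def act(self):
        # Act optimally for the lowest-indexed surviving hypothesis.
        env = self.envs[min(self.alive)]
        return max(env, key=env.get)              # action with best predicted reward

    def observe(self, action, reward):
        # Eliminate every hypothesis whose prediction was wrong.
        self.alive = {i for i in self.alive
                      if self.envs[i].get(action) == reward}

# A class of three environments; the true one is envs[2].
envs = [{"a": 1.0, "b": 0.0},
        {"a": 0.0, "b": 1.0},
        {"a": 0.0, "b": 0.5}]
true_env = envs[2]

agent = ConsistencyAgent(envs)
for _ in range(5):
    a = agent.act()
    agent.observe(a, true_env[a])

print(min(agent.alive))  # -> 2: only the true environment survives
```

Each wrong hypothesis is refuted after at most one interaction, so the agent eventually acts optimally for the true environment; the paper's contribution is finding conditions under which an analogous asymptotic guarantee survives in the far harder stochastic, history-dependent case.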
Contributor : Daniil Ryabko
Submitted on : Wednesday, November 9, 2011 - 3:26:51 PM
Last modification on : Saturday, December 18, 2021 - 3:02:27 AM
Daniil Ryabko, Marcus Hutter. On the Possibility of Learning in Reactive Environments with Arbitrary Dependence. Theoretical Computer Science, Elsevier, 2008, 405 (3), pp.274-284. ⟨10.1016/j.tcs.2008.06.039⟩. ⟨hal-00639569⟩