
Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint

Léonard Blier (1, 2), Corentin Tallec (3), Yann Ollivier (1)
(2) TAU - TAckling the Underspecified, Inria Saclay - Île-de-France, LRI - Laboratoire de Recherche en Informatique
Abstract: In reinforcement learning, temporal difference-based algorithms can be sample-inefficient: for instance, with sparse rewards, no learning occurs until a reward is observed. This can be remedied by learning richer objects, such as a model of the environment, or successor states. Successor states model the expected future state occupancy from any given state for a given policy, and are related to goal-dependent value functions, which learn how to reach arbitrary states. We formally derive the temporal difference algorithm for successor state and goal-dependent value function learning, both for discrete environments and for continuous environments with function approximation. In particular, we provide finite-variance estimators even in continuous environments, where the reward for exactly reaching a goal state becomes infinitely sparse. Successor states satisfy more than just the Bellman equation: a backward Bellman operator and a Bellman-Newton (BN) operator encode path compositionality in the environment. The BN operator is akin to second-order gradient descent methods and provides the true update of the value function when acquiring more observations, with explicit tabular bounds. In the tabular case and with infinitesimal learning rates, mixing the usual and backward Bellman operators provably improves the eigenvalues governing asymptotic convergence, and the asymptotic convergence of the BN operator is provably better than that of TD, at a rate independent of the environment. However, the BN method is more complex and less robust to sampling noise. Finally, a forward-backward (FB) finite-rank parameterization of successor states enjoys reduced variance and improved samplability, provides a direct model of the value function, has fully understood fixed points corresponding to long-range dependencies, approximates the BN method, and provides two canonical representations of states as a byproduct.
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-03151901
Contributor: Marc Schoenauer
Submitted on: Thursday, February 25, 2021 - 10:21:13 AM
Last modification on: Wednesday, April 14, 2021 - 3:40:15 AM


Identifiers

  • HAL Id: hal-03151901, version 1
  • arXiv: 2101.07123

Citation

Léonard Blier, Corentin Tallec, Yann Ollivier. Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint. 2021. ⟨hal-03151901⟩
