A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper. Year: 2022


Abstract

Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
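To make the setting concrete, here is a minimal sketch of regularized non-parametric TD(0) on a toy continuous-state Markov reward process. The dynamics, reward, kernel bandwidth, step size, and regularization constant below are illustrative assumptions, not the paper's experimental setup: the value function is represented as a kernel expansion over visited states, each step shrinks the iterate toward zero (ridge regularization) and adds a kernel function centered at the current state, scaled by the TD error.

```python
import numpy as np

def rbf(x, y, bw=0.1):
    """Gaussian (RBF) kernel, a universal kernel on [0, 1]."""
    return np.exp(-(x - y) ** 2 / (2 * bw ** 2))

def kernel_td0(T=500, gamma=0.9, eta=0.2, lam=1e-3, seed=0):
    """Illustrative regularized non-parametric TD(0) on a toy MRP.

    Represents V(s) = sum_i coefs[i] * rbf(centers[i], s) and applies,
    at each transition (s, r, s'), the functional-gradient update
        V <- (1 - eta*lam) * V + eta * (r + gamma*V(s') - V(s)) * k(s, .)
    All dynamics/parameters here are hypothetical, for illustration only.
    """
    rng = np.random.default_rng(seed)
    centers, coefs = [], []  # support states and their expansion coefficients
    s = rng.uniform()
    for _ in range(T):
        # Toy transition and reward on the circle [0, 1) (assumed, not the paper's).
        s_next = (s + 0.1 + 0.05 * rng.standard_normal()) % 1.0
        r = np.sin(2 * np.pi * s)
        V = lambda x: sum(c * rbf(ci, x) for c, ci in zip(coefs, centers))
        delta = r + gamma * V(s_next) - V(s)        # TD error
        coefs = [(1 - eta * lam) * c for c in coefs]  # ridge shrinkage of the iterate
        centers.append(s)
        coefs.append(eta * delta)                   # add eta * delta * k(s, .)
        s = s_next
    return centers, coefs
```

Note that the kernel expansion grows with the number of observed transitions, which is the usual cost of a non-parametric representation; a constant step size is used above for simplicity, whereas averaged iterates with decaying steps are what the convergence analysis concerns.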
Main file: main.pdf (3.29 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03672958 , version 1 (23-05-2022)

Identifiers

Cite

Eloïse Berthier, Ziad Kobeissi, Francis Bach. A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning. NeurIPS 2022 - Neural Information Processing Systems, Nov 2022, New Orleans (LA), United States. ⟨hal-03672958⟩
