Unbiasing Truncated Backpropagation Through Time

Yann Ollivier, Corentin Tallec
TAU - TAckling the Underspecified
LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Île-de-France, CNRS - Centre National de la Recherche Scientifique : UMR8623
Abstract: Truncated Backpropagation Through Time (truncated BPTT) is a widespread method for learning recurrent computational graphs. Truncated BPTT keeps the computational benefits of Backpropagation Through Time (BPTT) while relieving the need for a complete backtrack through the whole data sequence at every step. However, truncation favors short-term dependencies: the gradient estimate of truncated BPTT is biased, so it does not benefit from the convergence guarantees of stochastic gradient theory. We introduce Anticipated Reweighted Truncated Backpropagation (ARTBP), an algorithm that keeps the computational benefits of truncated BPTT while providing unbiasedness. ARTBP works by using variable truncation lengths together with carefully chosen compensation factors in the backpropagation equation. We check the viability of ARTBP on two tasks. First, on a simple synthetic task requiring careful balancing of temporal dependencies at different scales, truncated BPTT displays unreliable performance and, in worst-case scenarios, divergence, while ARTBP converges reliably. Second, on Penn Treebank character-level language modelling, ARTBP slightly outperforms truncated BPTT.
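The core idea in the abstract — random truncation lengths compensated by reweighting factors so that the truncated estimate is unbiased in expectation — can be illustrated on a toy scalar sum. The sketch below is an assumption-laden analogy, not the paper's algorithm: it treats per-step gradient contributions as fixed numbers and uses a constant stopping probability `p_stop` (both names are hypothetical), reweighting surviving terms by `1/(1 - p_stop)` per step so the estimator's expectation equals the full sum.

```python
import random

def reweighted_truncated_sum(contribs, p_stop, rng):
    """Unbiased estimate of sum(contribs) under random truncation.

    After each term we stop with probability p_stop; terms that survive
    t steps have probability (1 - p_stop)**t of being reached, so they
    are reweighted by 1 / (1 - p_stop)**t, exactly compensating the
    truncation and making the estimator unbiased.
    """
    estimate, weight = 0.0, 1.0
    for a in contribs:
        estimate += weight * a
        if rng.random() < p_stop:
            break  # truncate here; remaining terms are dropped
        weight /= (1.0 - p_stop)  # compensation factor for survivors
    return estimate

rng = random.Random(0)
# Toy stand-in for per-step gradient contributions of a long sequence.
contribs = [0.5 ** t for t in range(20)]
true_sum = sum(contribs)
samples = [reweighted_truncated_sum(contribs, 0.2, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
```

With `p_stop = 0.2`, each sample backtracks only about 5 steps on average, yet the empirical mean of the estimates matches the full 20-term sum; a naive truncation at 5 steps would systematically underestimate it, which is the bias ARTBP removes.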
Document type:
Preprint, working paper
2017

https://hal.inria.fr/hal-01660627
Contributor: Yann Ollivier <>
Submitted on: Monday, 11 December 2017 - 10:53:18
Last modified on: Tuesday, 17 April 2018 - 09:05:05


Identifiers

  • HAL Id : hal-01660627, version 1
  • ARXIV : 1705.08209

Citation

Yann Ollivier, Corentin Tallec. Unbiasing Truncated Backpropagation Through Time. 2017. 〈hal-01660627〉

Metrics

Record views: 175