
Practical Riemannian Neural Networks

Gaétan Marceau-Caron 1, Yann Ollivier 2,3
2 TAO – Machine Learning and Optimisation
CNRS (Centre National de la Recherche Scientifique, UMR8623), Inria Saclay – Île-de-France, Université Paris-Sud (Paris 11), LRI (Laboratoire de Recherche en Informatique)
Abstract: We provide the first experimental results on non-synthetic datasets for the quasi-diagonal Riemannian gradient descents for neural networks introduced in [Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a previously unpublished electroencephalogram dataset. The quasi-diagonal Riemannian algorithms consistently beat simple stochastic gradient descent by a varying margin. The computational overhead with respect to simple backpropagation is around a factor $2$. Perhaps more interestingly, these methods also reach their final performance quickly, thus requiring fewer training epochs and a smaller total computation time. We also present an implementation guide to these Riemannian gradient descents for neural networks, showing how the quasi-diagonal versions can be implemented with minimal effort on top of existing routines which compute gradients.
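As a rough illustration of how a quasi-diagonal update can sit on top of ordinary backpropagation, here is a NumPy sketch for one fully connected layer, following the quasi-diagonal inverse construction of [Ollivier, 2015]. The function name, the estimation of the metric from a single minibatch, and the `eps` damping term are our own assumptions for the sketch, not the paper's exact recipe:

```python
import numpy as np

def quasi_diagonal_update(x, delta, eps=1e-8):
    """Quasi-diagonal Riemannian direction for one dense layer (sketch).

    x:     (batch, n_in)  inputs to the layer
    delta: (batch, n_out) backpropagated derivatives w.r.t. pre-activations
    Returns (dW, db) of shapes (n_out, n_in) and (n_out,): the Euclidean
    gradient preconditioned by the quasi-diagonal inverse metric.
    """
    n = len(x)
    d2 = delta ** 2                          # per-sample squared sensitivities
    # Ordinary (Euclidean) gradients, averaged over the minibatch.
    gW = delta.T @ x / n                     # (n_out, n_in)
    gb = delta.mean(axis=0)                  # (n_out,)
    # Metric entries kept by the quasi-diagonal reduction, treating the bias
    # as input index 0: A_00 (bias-bias), A_0j (bias-weight), A_jj (diagonal).
    A00 = d2.mean(axis=0) + eps              # (n_out,)
    A0j = d2.T @ x / n                       # (n_out, n_in)
    Ajj = d2.T @ (x ** 2) / n + eps          # (n_out, n_in)
    # Apply the quasi-diagonal inverse to the gradient.  The denominator is
    # nonnegative by Cauchy-Schwarz; eps keeps it strictly positive.
    denom = A00[:, None] * Ajj - A0j ** 2 + eps
    dW = (A00[:, None] * gW - A0j * gb[:, None]) / denom
    db = (gb - (A0j * dW).sum(axis=1)) / A00
    return dW, db
```

The only extra quantities beyond plain backpropagation are batch averages of `delta**2` against `1`, `x`, and `x**2`, which is consistent with the roughly factor-$2$ overhead reported in the abstract.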
Contributor: Marc Schoenauer
Submitted on: Monday, January 29, 2018 - 8:07:53 AM
Last modification on: Thursday, July 8, 2021 - 3:49:45 AM



  • HAL Id: hal-01695102, version 1
  • arXiv: 1602.08007


Gaétan Marceau-Caron, Yann Ollivier. Practical Riemannian Neural Networks. 2016. ⟨hal-01695102⟩


