Conference paper

Constant Step Size Stochastic Gradient Descent for Probabilistic Modeling

Abstract

Stochastic gradient methods enable learning probabilistic models from large amounts of data. While large step-sizes (learning rates) combined with parameter averaging have been shown to work best for least-squares (e.g., Gaussian noise), they do not lead to convergent algorithms in general. In this paper, we consider generalized linear models, that is, conditional models based on exponential families. We propose averaging moment parameters instead of natural parameters for constant-step-size stochastic gradient descent. For finite-dimensional models, we show that this can sometimes (and surprisingly) lead to better predictions than the best linear model. For infinite-dimensional models, we show that it always converges to optimal predictions, while averaging natural parameters never does. We illustrate our findings with simulations on synthetic data and classical benchmarks with many observations.
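The abstract's central idea can be sketched in code. Below is a minimal illustration (not the paper's experiments) for logistic regression, a generalized linear model where the moment parameter is the predicted probability sigmoid(x·θ) and the natural parameter is the linear score x·θ. It runs constant step-size SGD and contrasts the two averaging schemes: averaging the iterates θ_t (natural parameters) versus averaging the per-iterate predictions (moment parameters). All names, dimensions, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical well-specified logistic-regression data, not the paper's benchmarks.
n, d = 20000, 5
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = (rng.random(n) < sigmoid(X @ theta_star)).astype(float)

gamma = 0.1                      # constant step size (illustrative choice)
theta = np.zeros(d)
theta_sum = np.zeros(d)          # running sum of natural parameters (iterates)
X_test = rng.normal(size=(1000, d))
pred_sum = np.zeros(len(X_test)) # running sum of moment parameters (predictions)

for t in range(n):
    x_t, y_t = X[t], y[t]
    # Single-sample gradient of the logistic negative log-likelihood.
    grad = (sigmoid(x_t @ theta) - y_t) * x_t
    theta -= gamma * grad
    theta_sum += theta
    pred_sum += sigmoid(X_test @ theta)

theta_avg = theta_sum / n
pred_natural = sigmoid(X_test @ theta_avg)  # scheme 1: average natural parameters
pred_moment = pred_sum / n                  # scheme 2: average moment parameters
truth = sigmoid(X_test @ theta_star)

print("MSE, averaged natural parameters:", np.mean((pred_natural - truth) ** 2))
print("MSE, averaged moment parameters :", np.mean((pred_moment - truth) ** 2))
```

Both schemes cost one extra accumulation per step; the paper's point is that only moment-parameter averaging remains consistent with a constant step size in the general (e.g., infinite-dimensional) setting.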
Main file

Averaging_predictions.pdf (496.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01929810, version 1 (21-11-2018)


Cite

Dmitry Babichev, Francis Bach. Constant Step Size Stochastic Gradient Descent for Probabilistic Modeling. UAI 2018 - Conference on Uncertainty in Artificial Intelligence, Aug 2018, Monterey, United States. ⟨hal-01929810⟩