
An optimal gradient method for smooth (possibly strongly) convex minimization

Adrien Taylor (1), Yoel Drori (2)
(1) SIERRA - Statistical Machine Learning and Parsimony, DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: We present an optimal gradient method for smooth (possibly strongly) convex optimization. The method is optimal in the sense that its worst-case bound exactly matches the lower bound on the oracle complexity for the class of problems, meaning that no black-box first-order method can have a better worst-case guarantee without further assumptions on the class of problems at hand. The method is in some sense a generalization of the Optimized Gradient Method of Kim and Fessler (2016) and, in the presence of strong convexity, asymptotically corresponds to the Triple Momentum Method (Van Scoy et al., 2017). Furthermore, the method is numerically stable for arbitrarily large condition numbers and admits a conceptually very simple proof, which involves a Lyapunov argument and a sum of two inequalities. Finally, we provide a numerical recipe for obtaining the algorithmic parameters of the method, using semidefinite programming, and illustrate that it can be used for developing other methods as well.
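
The method of the paper, called the Information-Theoretic Exact Method (ITEM) in the arXiv version, is a two-sequence momentum scheme using a single gradient evaluation per iteration. The following is a minimal NumPy sketch of the update as we read it from arXiv:2101.09741; the parameter sequences A_k, beta_k, delta_k are our transcription and should be verified against the paper before any serious use.

    import numpy as np

    def item(grad, x0, L, mu, n_iter):
        """Sketch of ITEM, transcribed from our reading of arXiv:2101.09741
        (unverified transcription). grad: gradient oracle of an L-smooth,
        mu-strongly convex function (mu = 0 allowed); x0: starting point."""
        q = mu / L                      # inverse condition number, 0 <= q < 1
        A = 0.0                         # certificate sequence, A_0 = 0
        x = z = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            A_next = ((1 + q) * A
                      + 2 * (1 + np.sqrt((1 + A) * (1 + q * A)))) / (1 - q) ** 2
            beta = A / ((1 - q) * A_next)
            delta = 0.5 * ((1 - q) ** 2 * A_next - (1 + q) * A) / (1 + q + q * A)
            y = beta * x + (1 - beta) * z        # mix the two iterate sequences
            g = grad(y)
            x = y - g / L                        # short (plain gradient) step
            z = (1 - q * delta) * z + q * delta * y - (delta / L) * g  # long step
            A = A_next
        return x

For q = 0 the sequence A_k grows quadratically, recovering an OGM-like O(1/k^2) guarantee, while for q > 0 the ratio A_k / A_{k+1} tends to (1 - sqrt(q))^2, consistent with the asymptotic correspondence to the Triple Momentum Method stated in the abstract.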
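The semidefinite-programming recipe mentioned in the abstract's last sentence appears to follow the performance-estimation approach associated with both authors. As a self-contained toy illustration (not the paper's recipe, only a sketch in its spirit), the CVXPY program below computes the exact worst case of a single gradient step x1 = x0 - (1/L) grad f(x0) over L-smooth convex functions with ||x0 - x*|| <= 1; the optimal value should come out near the known tight bound L/(4N + 2) = 1/6 for N = 1 and L = 1 (Drori and Teboulle, 2014).

    import numpy as np
    import cvxpy as cp

    L = 1.0
    # Gram matrix of (x0 - x*, g0, g1); by convention g* = 0 and f* = 0.
    G = cp.Variable((3, 3), PSD=True)
    f = cp.Variable(2)                  # f[0] = f(x0) - f*, f[1] = f(x1) - f*

    e = np.eye(3)
    x = {"*": np.zeros(3), 0: e[0], 1: e[0] - e[1] / L}  # x1 = x0 - g0 / L
    g = {"*": np.zeros(3), 0: e[1], 1: e[2]}
    F = {"*": 0, 0: f[0], 1: f[1]}

    def inner(u, v):                    # <u, v> read off the Gram matrix
        return u @ G @ v

    # Interpolation conditions characterizing L-smooth convex functions:
    # f_i >= f_j + <g_j, x_i - x_j> + ||g_i - g_j||^2 / (2L) for all pairs.
    pts = ["*", 0, 1]
    constraints = [inner(x[0], x[0]) <= 1]   # ||x0 - x*||^2 <= 1
    for i in pts:
        for j in pts:
            if i != j:
                constraints.append(
                    F[i] >= F[j] + inner(g[j], x[i] - x[j])
                    + inner(g[i] - g[j], g[i] - g[j]) / (2 * L))

    prob = cp.Problem(cp.Maximize(f[1]), constraints)
    prob.solve()
    print(prob.value)                   # approximately 1/6

The paper's recipe optimizes over the method's parameters on top of such worst-case programs; this snippet only evaluates a fixed method, but it shows the kind of semidefinite program involved.
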
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-03154583
Contributor: Adrien Taylor
Submitted on: Monday, March 1, 2021 - 10:01:50 AM
Last modification on: Friday, March 5, 2021 - 3:28:57 AM


Identifiers

  • HAL Id: hal-03154583, version 1
  • arXiv: 2101.09741


Citation

Adrien Taylor, Yoel Drori. An optimal gradient method for smooth (possibly strongly) convex minimization. 2021. ⟨hal-03154583⟩


Metrics

Record views: 1527