Near-linear time Gaussian process optimization with adaptive batching and resparsification

Abstract: Gaussian processes (GPs) are among the most successful frameworks for modeling uncertainty. However, GP optimization (e.g., GP-UCB) suffers from major scalability issues. Experimental time grows linearly with the number of evaluations, unless candidates are selected in batches (e.g., using GP-BUCB) and evaluated in parallel. Furthermore, computational cost is often prohibitive, since algorithms such as GP-BUCB require time at least quadratic in the number of dimensions and iterations to select each batch. In this paper, we introduce BBKB (Batch Budgeted Kernel Bandits), the first no-regret GP optimization algorithm that provably runs in near-linear time and selects candidates in batches. This is obtained with a new guarantee for the tracking of the posterior variances that allows BBKB to choose increasingly larger batches, improving over GP-BUCB. Moreover, we show that the same bound can be used to adaptively delay costly updates to the sparse GP approximation used by BBKB, achieving a near-constant per-step amortized cost. These findings are then confirmed in several experiments, where BBKB is much faster than state-of-the-art methods.
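The batching idea the abstract describes can be illustrated with a minimal GP-BUCB-style sketch: within a batch, each pick "hallucinates" its own observation, so only the posterior variance is updated (the mean would need the not-yet-available feedback), and the batch ends once some tracked variance has shrunk by more than a constant factor since the batch started, in the spirit of BBKB's adaptive rule. This is an illustrative sketch only, not the authors' implementation; the helper names, the RBF kernel, and the `ratio` threshold are assumptions.

```python
import numpy as np

def rbf(X, Y, ls=0.5):
    """Squared-exponential kernel matrix between row sets X and Y."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * ls ** 2))

def posterior_var(X_obs, X_cand, noise=1e-2):
    """GP posterior variance at candidate points given observed inputs."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_cand)                      # (n_obs, n_cand)
    prior = np.ones(len(X_cand))                 # k(x, x) = 1 for this RBF
    var = prior - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
    return np.maximum(var, 0.0)

def select_batch(X_obs, y_obs, X_cand, beta=2.0, ratio=2.0, noise=1e-2):
    """GP-BUCB-style batch selection with a BBKB-flavored adaptive stop:
    end the batch when any tracked variance has shrunk by more than a
    constant factor relative to its value at the start of the batch."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    mu = rbf(X_obs, X_cand).T @ np.linalg.solve(K, y_obs)  # mean is fixed within the batch
    var0 = posterior_var(X_obs, X_cand, noise)             # variances at batch start
    var = var0.copy()
    X_aug = X_obs.copy()
    mask = np.ones(len(X_cand), dtype=bool)                # not-yet-picked candidates
    batch = []
    while mask.any():
        ucb = mu + beta * np.sqrt(var)
        i = int(np.argmax(np.where(mask, ucb, -np.inf)))   # maximize UCB over unpicked
        batch.append(i)
        mask[i] = False
        # Hallucinate the pick: only the variance shrinks, as in GP-BUCB.
        X_aug = np.vstack([X_aug, X_cand[i]])
        var = posterior_var(X_aug, X_cand, noise)
        # Adaptive stopping rule (threshold `ratio` is an assumption).
        if np.any(var0[mask] > ratio * (var[mask] + 1e-12)):
            break
    return batch

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, (4, 1))
y_obs = np.sin(6 * X_obs[:, 0])
X_cand = np.linspace(0, 1, 30)[:, None]
batch = select_batch(X_obs, y_obs, X_cand)
```

Because the selected points are masked out of the UCB maximization, a batch never repeats a candidate, and the loop always terminates even if the variance test never fires.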
Document type :
Conference papers
Contributor: Michal Valko
Submitted on: Sunday, September 27, 2020 - 8:58:35 AM
Last modification on: Wednesday, September 30, 2020 - 3:34:08 AM
Long-term archiving on: Thursday, December 3, 2020 - 6:33:15 PM
  • HAL Id: hal-02950066, version 1


Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco. Near-linear time Gaussian process optimization with adaptive batching and resparsification. International Conference on Machine Learning, 2020, Vienna, Austria. ⟨hal-02950066⟩