Adaptive black-box optimization got easier: HCT only needs local smoothness

Xuedong Shang (1), Emilie Kaufmann (1), Michal Valko (1)
(1) SEQUEL - Sequential Learning, Inria Lille - Nord Europe; CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: Hierarchical bandits are an approach to the global optimization of extremely irregular functions. This paper provides new elements regarding POO, an adaptive meta-algorithm that does not require knowledge of the local smoothness of the target function. We first highlight that the subroutine used inside POO should have small regret under an assumption of local smoothness with respect to the chosen partitioning, and that it is unknown whether this assumption is satisfied by the standard subroutine HOO. In this work, we establish such a regret guarantee for HCT, another hierarchical optimistic optimization algorithm that requires knowledge of the smoothness. This confirms the validity of POO: we show that POO can be used with HCT as a subroutine, with a regret upper bound that matches that of the best-known algorithms using knowledge of the smoothness, up to a √(log n) factor.
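To make the meta-algorithm concrete, here is a minimal, simplified sketch of the idea the abstract describes: a hierarchical optimistic optimizer on [0,1] that needs smoothness parameters (ν, ρ), wrapped by a POO-style meta-algorithm that runs one instance per candidate ρ and recommends the best point found. This is an illustrative toy, not the paper's exact HOO/HCT/POO pseudocode; the parameter grid, the budget split, and the recommendation rule are simplifying assumptions.

```python
import math

class HOO:
    """Toy hierarchical optimistic optimizer on [0, 1] (binary partitioning).

    nu, rho: assumed smoothness parameters; a cell at depth h receives
    an optimistic bonus nu * rho**h on top of a UCB exploration term.
    """
    def __init__(self, f, nu=1.0, rho=0.5):
        self.f = f
        self.nu, self.rho = nu, rho
        self.children = {}          # node -> (left child, right child)
        self.count, self.mean = {}, {}
        self.root = (0, 0.0, 1.0)   # node = (depth, low, high)
        self.t = 0
        self.best_x, self.best_y = None, -math.inf

    def _b_value(self, node):
        # Unvisited nodes are maximally optimistic.
        if node not in self.count:
            return math.inf
        h, _, _ = node
        bonus = math.sqrt(2 * math.log(max(self.t, 2)) / self.count[node])
        return self.mean[node] + bonus + self.nu * self.rho ** h

    def step(self):
        """Traverse to a leaf via B-values, expand it, sample its midpoint."""
        self.t += 1
        node, path = self.root, [self.root]
        while node in self.children:
            left, right = self.children[node]
            node = left if self._b_value(left) >= self._b_value(right) else right
            path.append(node)
        h, lo, hi = node
        mid = (lo + hi) / 2
        self.children[node] = ((h + 1, lo, mid), (h + 1, mid, hi))
        y = self.f(mid)
        if y > self.best_y:
            self.best_x, self.best_y = mid, y
        for n in path:  # update empirical means along the traversed path
            c = self.count.get(n, 0)
            m = self.mean.get(n, 0.0)
            self.count[n] = c + 1
            self.mean[n] = m + (y - m) / (c + 1)

def poo(f, budget, rhos=(0.25, 0.5, 0.75, 0.9)):
    """POO-style wrapper: one subroutine instance per candidate rho,
    an even budget split, and the best sampled point as recommendation."""
    best_x, best_y = None, -math.inf
    per_instance = budget // len(rhos)
    for rho in rhos:
        algo = HOO(f, nu=1.0, rho=rho)
        for _ in range(per_instance):
            algo.step()
        if algo.best_y > best_y:
            best_x, best_y = algo.best_x, algo.best_y
    return best_x
```

On a noise-free toy objective such as f(x) = 1 - |x - 0.3|, `poo(f, 400)` returns a point close to the maximizer 0.3 without being told which ρ fits the function, which is the adaptivity the paper studies.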
Document type: Conference papers

Contributor: Michal Valko
Submitted on: Friday, September 14, 2018 - 2:53:00 PM
Last modification on: Friday, March 22, 2019 - 1:36:43 AM
Long-term archiving on: Saturday, December 15, 2018 - 2:54:57 PM
HAL Id: hal-01874637, version 1


Xuedong Shang, Emilie Kaufmann, Michal Valko. Adaptive black-box optimization got easier: HCT only needs local smoothness. European Workshop on Reinforcement Learning, Oct 2018, Lille, France. ⟨hal-01874637⟩