Adaptive multi-fidelity optimization with fast learning rates

Côme Fiegel (1, 2), Victor Gabillon (3), Michal Valko (2)
(2) Scool, Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
Abstract: In multi-fidelity optimization, we have access to biased approximations, of varying costs, of the target function. In this work, we study the setting of optimizing a locally smooth function with a limited budget Λ, where the learner must trade off the cost and the bias of these approximations. We first prove lower bounds on the simple regret under different assumptions on the fidelities, based on a cost-to-bias function. We then present the Kometo algorithm, which achieves the same rates up to additional logarithmic factors, without any knowledge of the function's smoothness or of the fidelity assumptions, improving on prior results. Finally, we empirically show that our algorithm outperforms prior multi-fidelity optimization methods without knowledge of problem-dependent parameters.
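The setting the abstract describes can be illustrated with a small sketch. Everything below is an illustrative assumption for exposition only: the toy target, the cost-to-bias function `bias(cost) = 1/cost`, and the naive fixed-fidelity baseline are not the paper's Kometo algorithm or its actual fidelity model.

```python
import math
import random

def target(x):
    # Toy locally smooth target on [0, 1]; its maximum is at x = 0.5.
    return 1.0 - (x - 0.5) ** 2

def noisy_eval(x, cost, rng):
    # Fidelity model (assumed for illustration): paying a higher cost
    # per query shrinks the bias, here via bias(cost) = 1 / cost.
    bias = 1.0 / cost
    return target(x) + bias * (2.0 * rng.random() - 1.0)

def naive_fixed_fidelity_search(budget, cost, seed=0):
    # Baseline that spends the whole budget Λ on queries of one fixed
    # cost, sampling x uniformly and recommending the best observed point.
    # An adaptive method would instead balance cheap biased queries
    # against expensive accurate ones.
    rng = random.Random(seed)
    best_x, best_val = 0.0, -math.inf
    for _ in range(int(budget // cost)):
        x = rng.random()
        val = noisy_eval(x, cost, rng)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

x_hat = naive_fixed_fidelity_search(budget=1000.0, cost=10.0)
simple_regret = target(0.5) - target(x_hat)
```

The simple regret above is the gap between the value of the true optimum and the value of the recommended point, which is the quantity the paper's lower and upper bounds are stated for.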
Document type :
Conference papers
Contributor: Michal Valko
Submitted on: Friday, July 16, 2021 - 3:29:13 PM
Last modification on: Sunday, June 26, 2022 - 9:10:13 AM
Long-term archiving on: Sunday, October 17, 2021 - 6:50:29 PM
HAL Id: hal-03288879, version 1
Côme Fiegel, Victor Gabillon, Michal Valko. Adaptive multi-fidelity optimization with fast learning rates. International Conference on Artificial Intelligence and Statistics, 2020, Palermo, Italy. ⟨hal-03288879⟩